Implementation proposal #3

Open
cheald opened this Issue Feb 2, 2013 · 22 comments

cheald commented Feb 2, 2013

I'm not sure where else to stick this, so I'll leave it here! (Originally at https://gist.github.com/4696144)

Root Key & Signing System

A single X509 key is generated per distribution platform (Rubygems.org, Gemfury, etc). This key is used to sign gem authors' certificate requests.

A gem author may generate a certificate and request that the platform sign it. Alice generates her x509 keypair with her email address encoded as the x509 name field, stashes the private key somewhere safe, and submits the pubkey to the signing system.
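A minimal sketch of Alice's side, assuming RSA keys and the standard Ruby OpenSSL API (names and the one-year lifetime are illustrative, not part of the proposal):

```ruby
require "openssl"

# Hypothetical sketch: Alice generates a keypair and a self-signed cert
# whose subject (the x509 "name" field) encodes her email address.
key = OpenSSL::PKey::RSA.new(2048)

cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = OpenSSL::X509::Name.parse("/CN=alice/emailAddress=alice@example.com")
cert.issuer     = cert.subject            # self-signed until the platform CA signs it
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 365 * 24 * 3600
cert.sign(key, OpenSSL::Digest.new("SHA256"))

# key.to_pem is stashed somewhere safe; cert.to_pem is what gets
# submitted to the signing system.
```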

The signing system consists of two parts:

  1. [Machine A] A web UI (or email inbox) responsible for accepting public keys and sending emails
  2. [Machine B] A signing machine with a shared data store (shared NFS mount, redis store, whatever - it must simply be a data store to act as a dead drop)

The UI accepts pubkeys, ensures their validity, parses the certificate for the name field, and sends a verification email to the email specified in the name field. The email contains a link with a cryptographic signature (something like an HMAC of the pubkey). The email owner clicks this link (or replies to the email) which causes Machine A to validate the response and put the affiliated pubkey into the dead-drop inbox.
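Machine A's link could be built as follows (a sketch; the secret, hostname, and token shape are assumptions beyond "something like an HMAC of the pubkey"):

```ruby
require "openssl"

# Hypothetical sketch of Machine A's verification link: an HMAC over the
# submitted pubkey, keyed with a secret known only to Machine A.
SECRET = "known-only-to-machine-a"

def verification_token(pubkey_pem)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA256"), SECRET, pubkey_pem)
end

def verification_link(pubkey_pem)
  "https://signing.example.org/verify?token=#{verification_token(pubkey_pem)}"
end

# When the owner clicks the link, Machine A recomputes the HMAC and
# compares (a constant-time comparison should be used in practice):
def valid_click?(pubkey_pem, token)
  verification_token(pubkey_pem) == token
end
```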

Machine B is monitoring the inbox for pubkeys. Once a key is received, it is signed, and placed in the dead-drop outbox.

Machine A monitors the outbox for signed keys. It parses the signed key for the name field, encrypts the key to the public key it contains (so only the holder of the matching private key can read it), and emails it to the address in the name field.

Alice retrieves the key from her email inbox, decrypts it with her private key, and then may use it to sign her gems.
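Machine B's signing step might look like this (a sketch; keys are generated inline purely to keep it self-contained, whereas in practice they would come from the dead-drop inbox and Machine B's protected storage):

```ruby
require "openssl"

# Hypothetical sketch of Machine B re-issuing a verified author cert
# under the platform root.
root_key  = OpenSSL::PKey::RSA.new(2048)
root_name = OpenSSL::X509::Name.parse("/CN=rubygems.org CA")

author_key  = OpenSSL::PKey::RSA.new(2048)
author_name = OpenSSL::X509::Name.parse("/CN=alice/emailAddress=alice@example.com")

signed = OpenSSL::X509::Certificate.new
signed.version    = 2
signed.serial     = Time.now.to_i
signed.subject    = author_name            # keeps the email-bearing name field
signed.issuer     = root_name              # chains back to the platform root
signed.public_key = author_key.public_key
signed.not_before = Time.now
signed.not_after  = Time.now + 365 * 24 * 3600
signed.sign(root_key, OpenSSL::Digest.new("SHA256"))

# signed.to_pem is then dropped into the outbox for Machine A to deliver.
```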

This system could have an exceptionally small attack surface, consisting of only a minimal mailserver (A) and a local-only daemon (B) which operate on shared storage (on either A or B, or on a third server, C).

Gem certificate chain history server

A separate server (the "chain of trust history server") maintains and validates cert chain history for all gems on Rubygems.org. It is queryable by Rubygems-bin, allowing Rubygems-bin to obtain the last known and verified certificate chain for a given gem when installing, in the event that no local history is known.

It must be separate from the Rubygems.org platform in order to avoid allowing a compromise of Rubygems.org to be pivoted into a compromise of the certificate history system, allowing an attacker to upload fraudulent certificates.

This system would naturally serve as an automated IDS, as well, and could raise an alarm if it ever discovered that Rubygems.org had accepted a gem without a valid certificate chain, indicating a breach of the system's certificate verification mechanisms.

Gem Signing

The gem is signed with something like:

s.signing_key = File.expand_path("~/.gemtrust/.gem-private_cert.pem") # the author's private key
s.cert_chain  = ['rubygems-public_cert.pem', 'alice-public_cert.pem'] # root cert first, then the author's signed cert

Alice may then upload her gem to Rubygems.org. Upon receipt of the gem, Rubygems.org ensures that the gem has been signed with a cert chain terminating in a certificate that it knows about and trusts. Additionally, it will ensure that the gem is signed with a certificate containing an email that matches the email on the uploading account.

Rubygems(-bin) will maintain a local history of certificate chains for a gem. If a certificate is removed (without a signed authorization), then it will refuse to install the gem, suggest review, and require a user override to proceed. Rubygems.org will additionally maintain this certificate chain, and refuse to accept a gem that does not include the owning account's email as a part of the chain of trust. This ensures:

  • If an individual Rubygems.org account is compromised (but not the legitimate owner's private key), then a malicious entity cannot upload a modified gem into the account.
  • If an individual Rubygems.org account is compromised, and the attacker has been able to forge a key with the account's email, then the attacker can upload new gems into the account, but cannot publish new versions of existing gems, as they will fail to validate against the chain of trust history.
  • If Rubygems as a whole is compromised, then the attacker may be able to upload a malicious gem. However, Rubygems-bin will refuse to install any newer version of it.
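The continuity rule behind these guarantees can be sketched as a simple check against the locally recorded chain history (a hypothetical sketch; the record format is an assumption):

```ruby
# Hypothetical sketch of the local continuity check: every signer seen in
# the last known chain must either still be present or have a signed
# removal authorization on record.
def chain_continuous?(previous_chain, new_chain, authorized_removals)
  removed = previous_chain - new_chain
  removed.all? { |fingerprint| authorized_removals.include?(fingerprint) }
end

# A removed signer without authorization must trigger a refusal:
chain_continuous?(%w[rubygems alice], %w[rubygems dave], [])           # => false
chain_continuous?(%w[rubygems alice], %w[rubygems charlie], ["alice"]) # => true
```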

Rubygems will allow certificates to be added to the certificate chain, so long as they are signed by a non-root certificate in the chain. This permits transfer of project ownership and multiple signing keys. For example:

Project transfer

Alice starts a project, Foobar, signs it with her key. The chain now looks like:

[rubygems, alice]

Alice then later abandons the project, and Charlie takes over as maintainer with Alice's blessing. Alice would generate a key re-issue signature on the project, authorizing the removal of her key, and the addition of Charlie's. The chain now looks like:

[rubygems, charlie] (alice removed with authorization)

As Alice signed Charlie's key and authorized her own key's removal, she is still part of the chain of trust; the chain history therefore permits the change, and the system permits installation, with the implicit understanding that Alice has blessed Charlie's key. Future releases will not need Alice's blessing.
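One way the re-issue signature could work (a sketch; the record format, field names, and use of JSON are illustrative assumptions, not part of the proposal):

```ruby
require "openssl"
require "json"

# Hypothetical sketch of a key re-issue authorization: the outgoing owner
# signs a record naming the cert being removed and the cert replacing it.
alice_key = OpenSSL::PKey::RSA.new(2048)

record = JSON.generate(
  "gem"    => "foobar",
  "remove" => "alice-fingerprint",
  "add"    => "charlie-fingerprint"
)
authorization = alice_key.sign(OpenSSL::Digest.new("SHA256"), record)

# The chain history server can verify the authorization against the
# public key already present in the gem's recorded chain:
alice_key.public_key.verify(OpenSSL::Digest.new("SHA256"), authorization, record)
# => true
```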

Multi-user projects

Alice starts a popular project, which she then wants to add publishing members to while retaining publication ability herself. Initially, the trust chain is:

[rubygems, alice]

Upon wanting to add a new member, Alice generates a project master key and authorizes key reissuance of the project using the new project master key:

[rubygems, project-master] (alice removed with authorization)

Then Alice uses the project-master key to sign Charlie's key (and perhaps her own personal key):

[rubygems, project-master, charlie]

Alice may continue to publish to the project while allowing Charlie to publish to the project, without giving Charlie her personal key's trust.
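In gemspec terms, a release Charlie publishes would then carry the project-master link in its chain (a sketch; file names are illustrative):

```ruby
require "rubygems"

# Hypothetical gemspec for a release Charlie signs under the project
# master key. The chain runs root -> project master -> charlie.
spec = Gem::Specification.new do |s|
  s.name        = "foobar"
  s.version     = "1.0.0"
  s.summary     = "example"
  s.authors     = ["Charlie"]
  s.signing_key = File.expand_path("~/.gemtrust/charlie-private_key.pem")
  s.cert_chain  = ["rubygems-public_cert.pem",
                   "foobar-master_cert.pem",
                   "charlie-public_cert.pem"]
end
```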

Malicious cert chain modification

If Dave, a malicious actor, managed to wrest control of the project, he would be able to sign the gem, but its trust chain would look like:

[rubygems, dave]

Thus, both Rubygems.org and Rubygems-bin would reject the gem based on the gem's known certificate history, and Alice's unauthorized exclusion from the certificate chain.

Gem installation and verification

Bob, a Ruby developer, wants to use Alice's gem. Bob would install the Rubygems.org public cert as a trusted certificate:

gem cert --add rubygems-public_cert.pem

Bob may then download and install Alice's gem, and Rubygems(-bin)'s HighSecurity policy will validate and accept the gem, and permit it to install.
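RubyGems already ships named verification policies, of which HighSecurity is the strictest; selecting it programmatically looks like this (the `-P HighSecurity` flag on `gem install` requests the same policy):

```ruby
require "rubygems/security"

# HighSecurity requires the gem to be signed and its cert chain to
# terminate in a locally trusted certificate, which is exactly the
# behavior this flow relies on.
policy = Gem::Security::HighSecurity
policy.class  # => Gem::Security::Policy
```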

Certificate revocation

Before fetching a gem, Rubygems would need to fetch any certificate revocation lists. It would then check the trusted certificate list for revocations, and remove any that appear on the list. This is the primary mechanism by which a compromised CA key would be removed. Users would be required to manually install the new key in this event.

This necessitates that the Rubygems public key must be published in a location that is not connected to the CA, as a compromise of the CA could allow an attacker to revoke the otherwise-legitimate root key and publish his own for consumption.

Each time Rubygems runs a network operation, it should

  1. Check if the revocation list has changed since the last time it validated certificates for known gems.
  2. If the list has changed, validate the certificate chains for all installed gems. Prompt to remove any with invalid certificates.
  3. If step 2 was run, write a hash of the revocation list and the list of gems that passed muster.
  4. Remove any entries from the local chain of trust history that contain revoked certificates.
  5. Check for a new revocation list
  6. Run step 2 if the revocation list has changed.

This allows for certificate verifications and revocations for multiple gem installs (RVM gemsets, bundler local installs) in a given system.
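Steps 1-3 amount to caching a digest of the revocation list; a minimal sketch, assuming the list bytes are fetched elsewhere and the stamp-file location is illustrative:

```ruby
require "digest"

# Hypothetical sketch: hash the fetched revocation list and only
# re-validate installed gems when the hash differs from the one recorded
# after the last validation pass.
STAMP = File.expand_path("~/.gem/revocations.sha256")

def revocation_list_changed?(crl_bytes)
  current = Digest::SHA256.hexdigest(crl_bytes)
  last    = File.exist?(STAMP) ? File.read(STAMP).strip : nil
  current != last
end

def record_validation_pass(crl_bytes)
  # Written only after step 2's full chain validation succeeds.
  File.write(STAMP, Digest::SHA256.hexdigest(crl_bytes))
end
```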

Attack Surfaces

  • Installation of a malicious certificate as a trusted root certificate on a local machine would result in signatures becoming unreliable. However, given that this would require some level of ownership of the machine, it would likely be a small problem in such an event.
  • Compromise of Rubygems.org's distribution platform may result in the upload of malicious gems. Such gems would be distributed to gem installers, which would then reject the gems due to either a local failed chain of trust, or a failed chain of trust from the chain history server.
  • Compromise of the chain history server would not be exploitable to install malicious software, as the attacker must also have control of the distribution platform. MITM attacks would be viable, but if you can MITM Rubygems.org, you can MITM chain history server queries.
  • Compromise of the chain history server AND Rubygems.org would allow for attackers to upload compromised gems to Rubygems.org and distribute them to pristine installs. Upgrades would still fail due to the local chain of trust history.
  • Compromise of the Rubygems' pubkey publication platform could result in an attacker publishing his own public key, which would affect people installing the certificate for the first time. However, legitimate gems from Rubygems.org would fail to install as they were not signed with the attacker's keypair.
  • Compromise of the pubkey platform AND the Rubygems.org platform would result in failure to install due to local or queried chain of trust histories.
  • Compromise of the CA's "Machine A" would result in people being able to obtain signed keys for emails without validation. It would not expose the private key for Machine B. This would permit uploading of new gems to a compromised user account, but new versions of existing gems would fail to upload, as the key provided would not be a part of the gem's existing chain of trust history.
  • Compromise of the CA's "Machine B" would result in disclosure of the private key, requiring that the root key be revoked and reissued. This would invalidate all current gem signatures. Illicit replacement of the private key on CA's "Machine B" would result in people being issued certificates that would fail to upload to Rubygems.org, due to failure to validate the cert chain against the legitimate Rubygems.org root certificate.
  • Compromise of the CA and Rubygems.org would result in pristine installs being served malicious software. Upgrades would still fail due to local chain of trust history.
  • An author's stolen private key may be used to fraudulently sign requests. This may be defended against by following proper key protection measures and password-protecting the key.
  • Most raised MITM attacks can be avoided by performing Rubygems.org and chain history queries via SSL.

YorickPeterse Feb 2, 2013

A question I have: both in the current implementation as well as your example (maybe it's just because it's an example) the Gemspecs assume that there's only one person signing and pushing Gems. However, this is not always the case.

For example, for the "ramaze" Gem there are two people that generally push out releases: me and the main author. Only the main author being able to sign the releases would become annoying quite fast.

I'd also like to see a more user friendly UI for the commandline utility, though that's more of an implementation detail.

p.s. it might be easier to put the Gist's text directly into the issue, right now people might start commenting both here on the issue as well as on the Gist.

cheald Feb 2, 2013

My proposal addresses this under the "Gem Signing" section. A project may have a master signer (project owner) who may then sign others' keys, which may then be used in the cert chain. An unbroken chain back to the root cert is required for acceptance of the cert, but this allows for multiple certs to be valid for a single gem.

I moved the gist inline. Thanks.

YorickPeterse Feb 2, 2013

A small addition to the proposal that I have: a standard directory that contains the keys of a developer. This directory would be ~/.gem/certificates for the certificates and private keys. Note that the name of the "certificates" directory is just an example; an equally decent name would be "trust", "signing", etc.

cbetta Feb 2, 2013

So is there going to be a certificate per dev per gem, or just a certificate per dev?

cheald Feb 2, 2013

In most cases, you could get away with one certificate per dev. The certificate may be re-used across all of that dev's projects.

However, in the case of projects that have multiple members, the founding member may want a separate keypair, and use the project private key to sign the other members' certificates. This means that Joe may own a project Foo and invite Jane to publish. He would have a foo keypair, sign jane-pubkey with foo-privkey, and jane is then a part of the chain of trust for the project. Jane is not a part of the chain of trust for any of Joe's other projects, since he signed her pubkey with the project key, not his personal key.

I'm already signing my own gems with a personal key and a company key, specifically for that kind of situation; I want my co-workers to be able to cosign company gems without them being implied as trusted on my personal projects.

In the situation where a founder initially signs with his personal key, upon adding another member, he would generate a project key, sign the project key with his personal key, and use the project key to sign any other members' keys.

Edit: As I think about it, this would be insufficient, as once Joe has signed Joe-Foo-pub and used it to sign Alice-pub, then Alice could just inject the legitimate Joe-Foo-pub into the Joe-Bar gemspec, and tack Alice-pub onto the chain, authorizing her to publish on the project. Thus, I'd switch my answer to say "one keypair per project".

It might be worth thinking about a project-specific key revocation mechanism then, where Joe could originally sign with his personal key, but upon upgrading to multiple members, he would create a new key, sign a key revocation/reissue authorization (allowing key removal to pass muster with the cert chain history), which would deauthorize his personal key and authorize the new project key, without the project key being signed by his personal key.

Personal certificates are fine as long as they aren't being used to sign other certs. Project certs should be used in those cases.

I updated the proposal accordingly.

cheald Feb 2, 2013

Question worth noodling - how would the chain history server be discoverable? Querying it from the gem repository seems like it would be primed for failure for pristine installs, since a hijacked repository could point queries to a bogus chain history server.

Perhaps just use a single DNS convention - so that chain-history.$platform.$tld is used, and may be inferred rather than discovered? Encode the chain server as a part of the certificate metadata?
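The DNS-convention option would be trivial to implement client-side; a sketch, assuming the chain-history. prefix convention:

```ruby
require "uri"

# Hypothetical sketch of the inferred-name convention: the chain history
# host is derived from the repository URI by convention, never asked of
# the (possibly hijacked) repository itself.
def chain_history_host(source_uri)
  "chain-history.#{URI(source_uri).host}"
end

chain_history_host("https://rubygems.org")  # => "chain-history.rubygems.org"
```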

m-o-e Feb 2, 2013

I don't understand the purpose of the single rooted CA in this proposal. It introduces a lot of complexity (machine A/machine B), a single point of failure, maintenance concerns, and not least an attack vector of its own (silent compromise of the signing CA).

As I understand it (and please correct me): Under pretty much every realistic attack scenario we are reduced to continuity verification in any case. I.e. the gem install command can complain when the upgrade to a gem was signed with a different cert than the previous version, but there's little we can do beyond that.

The gem install command may seek remote assistance from trusted sources (e.g. "chain history"-server) for this continuity check, but the signature from the central CA does not really help with it as it doesn't prove anything interesting that the auxiliary mechanisms (chain history, "oldest key") wouldn't also prove.

As a counter-proposal:

As long as the CA signature only proves "This user has an e-mail address", can't we as well skip the CA and default to trusting any new key by default (since this is what the CA effectively does)? The local continuity-check, cert revocation and "chain history" servers could be added in the same way, without a central CA.

I would like to hear a specific attack scenario that the centralized CA prevents, or a feature that it enables, that could not be equally solved in a decentralized approach.

Please don't get me wrong: I agree with almost everything in your proposal and it is very well written. I'm only worried about burdening the volunteers who run rubygems.org with the maintenance of such a complex infrastructure. I think this should only be done if we can enumerate some critical benefits over simpler alternatives (those benefits may very well exist, but I struggle to see them right now).

dstufft Feb 2, 2013

A CA in the HTTPS sense only works because there is some level of verification that the person requesting the certificate for example.com has authorization to represent that domain. At the basic certificate levels that is done by simple domain validation: sending an email to a constrained list of admin-like addresses (e.g. root@example.com etc). That's how the automated systems for trust work. The manual reviews require validating EINs and tax documents and such to prove identity, and cost much more (because they are much more involved).

On the RubyGems side of things there's no simple way to automate validation that the person requesting a certificate for project X is indeed valid for that project. From what I can tell your suggestion ignores this and just offers keys on a first come, first served basis. This is fatally flawed, because the purpose of a CA is to centralize the trust decision (e.g. can I trust XYZ for the Project ABC); by moving to a first come, first served basis you remove that purpose of a CA.

The naive approach to handling that trust is to use RubyGems.org to say "yes, this user is authorized for Project ABC". However, the problem with doing this is that RubyGems.org then becomes a centralized source of authorization. With the traditional domain-based CA, each organization runs its own mail servers, so in order to get widespread false trust you'd need to gain access to a wide number of mail servers. In this naive approach, all that an attacker would need to do is (again) exploit RubyGems.org to be able to generate their own trust certs for a particular gem.

The only method of validating that user XYZ is authorized to get trust for project ABC is a manual review of GitHub, Twitter, and email to attempt to determine whether the user requesting trust is the user authorized for the project. This is further complicated when you factor in people who might not have a strong online presence releasing their first gem, or new gems in general, because there may not be a good method of determining that.

Setting up secure infrastructure is very hard; however, equally hard (or realistically, harder) is setting up a method of verifying that a particular user is authorized for a particular project. Without a proposal that adequately handles making that trust decision, the move to a central CA makes the system weaker, because it creates a very big target for exploiting the entire Ruby community. Again, the primary purpose of a central CA is to centralize the decision of who to trust, because that's ultimately what a valid CA-driven certificate is: a document stating that the CA trusts this user for this particular domain/project.

raggi Feb 2, 2013

Raz: A central CA doesn't help prevent attacks, it provides vectors for recovery. Any of the models can be poisoned, the question is how fast and reliably you can propagate invalidation of the poisoned data.

The other thing is, if we want this to help at all with anything like what happened in the last week, then folks like drbrain, mark, evan, qrush, and myself (anyone helping with validation) need to be well connected in the web before the incident starts. We can't "get connected" after the incident, as that's far too prone to poisoning. We had this scenario, which drbrain investigated during our pruning of invalidated gems: we had a list of gems that were actually signed, but other than zenspider's pubkey, none of us had pubkeys for them. In a web of trust scenario, this can easily be repeated; if we're (for better or worse) in the validation hot-seat, and we're not very well connected in the trust web, then we can't validate. What do we do then? Pull everything?

I really don't like the general concept of a volunteer-run CA, because volunteer groups are so unstable, and central CAs are so dangerous as a single point of failure. That unfortunately doesn't make other solutions less prone to the same problems, and so we have to consider a lot of other factors that seem to keep being missed as we loop on this discussion each time. Just like with backup systems, it's not just how technically cool your backup is; it's about the processes that you follow, and most importantly your ability to recover.

The fact is, I've met Eric, Evan, Nick and many other folks around the Rubygems team. I haven't met Mark, and I hope to, but I replicated and validated his work for both that reason and because we needed to do two passes over that first validation that we did (searching for additional cases of abusive yaml). I certainly haven't met most gem authors, and most of you haven't met me. You're going to continue to trust Rack releases (which are pgp signed already, look at the tags), but not one person in the community has ever asked for my PGP fingerprint, let alone exchanged cryptographic trust. This data is real, today, and I would implore you to consider it as meaningful - it is how communities act in the long run. This caremad will be much reduced in a week, and at that point people don't want to have a lot of work to do in order to breed trust.

raggi commented Feb 2, 2013

Raz: A central CA doesn't help prevent attacks, it provides vectors for recovery. Any of the models can be poisoned, the question is how fast and reliably you can propagate invalidation of the poisoned data.

The other thing is if we want this to help at all with anything like what happened in the last week, then folks like drbrain, mark, evan, qrush, and myself (anyone helping with validation) need to be well connected in the web before the incident starts. We can't "get connected" after the incident, as that's far too prone to poisoning. We had this scenario, which drbrain investigated during our pruning of invalidated gems. We had a list of gems that were actually signed, but other than zenspiders pubkey, none of us had pubkeys for them. In a web of trust scenario, this can easily be repeated, if we're (for better or worse) in the validation hot-seat, and we're not very well connected in the trust web, then we can't validate. What do we do then? Pull everything?

I really don't like the general concept of a volunteer-run CA, because volunteer groups are so unstable, and central CAs are so dangerous as a single point of failure. Unfortunately, that doesn't make other solutions less prone to the same problems, and so we have to consider a lot of other factors that seem to be missed each time we loop on this discussion. Just like with backup systems, it's not just how technically cool your backup is; it's about the processes that you follow, and most importantly your ability to recover.

The fact is, I've met Eric, Evan, Nick and many other folks around the Rubygems team. I haven't met Mark, and I hope to, but I replicated and validated his work both for that reason and because we needed to do two passes over that first validation we did (searching for additional cases of abusive yaml). I certainly haven't met most gem authors, and most of you haven't met me. You're going to continue to trust Rack releases (which are PGP-signed already; look at the tags), but not one person in the community has ever asked for my PGP fingerprint, let alone exchanged cryptographic trust. This data is real, today, and I would implore you to consider it as meaningful - it is how communities act in the long run. This caremad will be much reduced in a week, and at that point people won't want a lot of work to do in order to build trust.

m-o-e commented Feb 2, 2013

@raggi I don't understand the vectors for recovery that the central CA adds.

In my view a fully automatic CA signature simply doesn't convey any useful information. If rubygems.org is compromised and you verify that all gems still match their signatures (chained to the root), that still doesn't tell you whether a gem file has been replaced in the meantime - with a new, properly signed gem, just signed with a different key.

If you want to detect the latter you must store a tuple like (timestamp, gem_id, key_id) somewhere in order to verify continuity. If you are going to do that (as outlined with the "chain history server" in the proposal) then it becomes rather irrelevant whether key_id was initially generated by an automated CA or by the user himself.
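The continuity check described above can be sketched in a few lines of Ruby. This is only an illustration of the idea; the class and method names are invented, not part of any concrete proposal:

```ruby
require "time"

# Hypothetical continuity log storing (timestamp, gem_id, key_id) tuples.
class ContinuityLog
  Entry = Struct.new(:timestamp, :gem_id, :key_id)

  def initialize
    @entries = []
  end

  def record(gem_id, key_id, timestamp: Time.now)
    @entries << Entry.new(timestamp, gem_id, key_id)
  end

  # A release is "continuous" if it is signed with a key already seen
  # for this gem - or if the gem has never been seen before (the
  # first-upload case, which nothing in this scheme can vouch for).
  def continuous?(gem_id, key_id)
    seen = @entries.select { |e| e.gem_id == gem_id }.map(&:key_id)
    seen.empty? || seen.include?(key_id)
  end
end

log = ContinuityLog.new
log.record("rails", "key-A")
log.continuous?("rails", "key-A") # => true
log.continuous?("rails", "key-B") # => false: the key changed
```

Note that the check works the same whether `key-A` was self-generated or CA-issued, which is the point being made here.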

As said, maybe I'm missing something, but I still don't see a scenario where the CA adds a tangible benefit (in terms of recovery or otherwise). As long as the CA hands out keys in a fully automatic fashion requiring nothing but an e-mail address, where is the difference between doing that and just having everyone generate their own keys?

Can you provide a scenario?

danielknell commented Feb 2, 2013

an alternative proposal #4

cheald commented Feb 2, 2013

@Raz - The purpose of the root cert provider is:

  1. It provides an email verification mechanism independent of the Rubygems.org platform. This serves the dual purpose of a) letting rubygems ensure that the person uploading gems to the dhh@37signals.com account owns that email account, and b) allows end users to be confident of the same because of their trust in the validity of the rubygems certificate. Signing arbitrary keys does not accomplish this.
  2. It provides a much needed single point of trust, which is critical for adoption by end users. In order to establish trust, you have to trust something at some point. Trusting the rubygems root cert, and then tying all gem signing certs to the root cert in a chain of trust allows end users to trust the entire rubygems repository by trusting the rubygems public key. If no root cert existed, then users would be stuck having to trust one key per gem. For any signing system to work, it has to be as easy as importing a repository GPG key into your distro's package manager (and we could certainly use GPG for it, but that's been shot down by the Rubygems core team already). It should be obvious that this is likewise a single point of failure, which is a feature; if I see a new post on HN that rubygems.org is compromised, I can immediately untrust the rubygems.org certificate, and all gems sourced to rubygems.org become unsafe as far as my system is concerned. This is very valuable.
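As a concrete sketch of point 2, here is a minimal Ruby/OpenSSL illustration of a client that trusts only a repository root certificate and verifies an author certificate through it. The names, key sizes, and lifetimes are invented for the example, not taken from any proposal:

```ruby
require "openssl"

# Build a certificate, optionally signed by an issuing CA cert/key.
def make_cert(subject, pubkey, issuer_cert, issuer_key, serial:, ca: false)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2
  cert.serial = serial
  cert.subject = OpenSSL::X509::Name.parse(subject)
  cert.issuer = issuer_cert ? issuer_cert.subject : cert.subject
  cert.public_key = pubkey
  cert.not_before = Time.now
  cert.not_after = Time.now + 3600
  ef = OpenSSL::X509::ExtensionFactory.new
  ef.subject_certificate = cert
  ef.issuer_certificate = issuer_cert || cert
  cert.add_extension(
    ef.create_extension("basicConstraints", ca ? "CA:TRUE" : "CA:FALSE", true))
  cert.sign(issuer_key, OpenSSL::Digest.new("SHA256"))
  cert
end

# Self-signed repository root ("rubygems-root" is a placeholder name).
root_key  = OpenSSL::PKey::RSA.new(2048)
root_cert = make_cert("/CN=rubygems-root", root_key.public_key,
                      nil, root_key, serial: 1, ca: true)

# Author cert signed by the root.
author_key  = OpenSSL::PKey::RSA.new(2048)
author_cert = make_cert("/CN=alice@example.com", author_key.public_key,
                        root_cert, root_key, serial: 2)

# The client trusts only the root; the author cert verifies through it.
store = OpenSSL::X509::Store.new
store.add_cert(root_cert)
store.verify(author_cert) # => true
```

The "one key to import" property falls out of this: a client that adds the single root cert to its store can verify every author cert chained to it.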

WoT systems have the chicken-and-egg problem, are difficult for new people to integrate into (which is a feature of WoT, and is an antifeature for Rubygems' purposes, IMO), and are subject to a host of practical concerns which are well-enumerated by PGP's decades-long struggle for any measure of mainstream visibility, let alone adoption.

dstufft commented Feb 2, 2013

  1. What protects someone from attacking RubyGems.org and adding a new email address to the account and getting a signature for that email address?

  2. Single point of trust is not needed. If your single point of trust cannot verify authenticity, then your single point of trust is lying to its users.

WoT systems are not all that great here either for the reasons you mentioned.

cheald commented Feb 2, 2013

  1. Nothing, for pristine gems. For established gems, the cert chain history prevents that. Adding or removing a cert from the cert chain for a known gem would require the signature of a user's cert already authorized on the gem.

  2. Single point of trust is critical for adoption. I'm going to adamantly argue that any system which is sufficiently complex will die on the vine, because your average developer doesn't care enough to figure out how to use it. The single point of trust can verify identity; it doesn't attempt to verify the more semantic aspects of project authorization. I have never contended that this system would even begin to approach that, and such a system would need humans performing manual auditing to even approach usability.

cheald commented Feb 2, 2013

  1. The primary purpose of email verification is to a) provide distribution platforms with a secondary correlation measure, to protect against hijacked accounts, and b) to allow users to check the email on the signing cert if they ever had cause to. For less popular gems, this is obviously not valuable, but for more popular gems, it's a guard against social engineering.
  2. This entire strategy does nothing to ensure gem trustworthiness. All it does is ensure gem authenticity, which is to say that gem Foo has been signed by a consistent and authorized list of users, and I can walk the trust chain back to a root that I inherently trust. Trustworthiness is an entirely different problem which would likely require manual human oversight to solve. I can still upload rm-rf-0.1.gem to Rubygems, which will wipe your hard drive when you install it, but you'll know that when you upgrade to rm-rf-0.1.1, that it was provided by the same author. By being able to validate the chain of trust back to a trusted certificate (rubygems), I can ensure that only people authorized by the person who initially uploaded the gem to Rubygems have made changes.

This does not, and cannot solve the issue of people ripping a project, repackaging it, signing it, and uploading it as a new project (perhaps with some malicious payload). That is not the problem that I am trying to solve here.

m-o-e commented Feb 2, 2013

@cheald

re: 2.

But that's not what the CA does.

As long as it hands out keys to anyone with an e-mail address, and validation silently accepts any key signed
by the CA, I will not be notified when 'gem install rails' suddenly updates to a gem signed by dhh@37signals.RU.

It would be exceedingly dangerous to silently trust the CA signature here.

The measures that we inevitably need on top (continuity) operate regardless of whether there is a
central CA or not. I would really like to hear a specific problem or situation that the CA helps with.
Not a generic phrase like "walk the chain back to a root we trust" because we have no reason to trust that root.

The root, as proposed, will give its blessing to anyone with an e-mail address. How are we supposed to trust it?

cheald commented Feb 2, 2013

That's what the trust history is for. Unless dhh@37signals.com authorized the dhh@37signals.ru key, rubygems.org would fail to accept it (presuming it is not compromised), and if it did, the local rubygems install would refuse to install it.
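The rule here - a new key is accepted only when an already-authorized key signs off on it - might look roughly like the following Ruby sketch. All names and the payload format are hypothetical:

```ruby
require "openssl"

old_key      = OpenSSL::PKey::RSA.new(2048)
new_key      = OpenSSL::PKey::RSA.new(2048)
attacker_key = OpenSSL::PKey::RSA.new(2048)

# The trust history: gem name -> PEM pubkeys currently authorized for it.
authorized = { "rails" => [old_key.public_key.to_pem] }

# Accept a key rotation only if some already-authorized key signed
# the new public key; on success, add the new key to the history.
def authorize_rotation(authorized, gem_id, new_pub_pem, signature)
  ok = authorized.fetch(gem_id, []).any? do |pem|
    OpenSSL::PKey::RSA.new(pem)
      .verify(OpenSSL::Digest.new("SHA256"), signature, new_pub_pem)
  end
  authorized[gem_id] << new_pub_pem if ok
  ok
end

new_pub  = new_key.public_key.to_pem
good_sig = old_key.sign(OpenSSL::Digest.new("SHA256"), new_pub)
bad_sig  = attacker_key.sign(OpenSSL::Digest.new("SHA256"), new_pub)

authorize_rotation(authorized, "rails", new_pub, bad_sig)  # => false
authorize_rotation(authorized, "rails", new_pub, good_sig) # => true
```

The dhh@37signals.ru key in the example above would fall into the `bad_sig` case: no already-authorized key vouched for it.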

m-o-e commented Feb 2, 2013

@cheald Agreed. And finally we have arrived at the missing link. :)

Everything you have argued for so far is covered by the "chain history server", which I agree violently with.
But what is the justification for the CA?

mattconnolly commented Feb 3, 2013

Doesn't a root CA also provide for revocation by managing the chain of certificates? Wouldn't that also be a good place for the chain history? i.e.: the CA has approved a gem's new owner X from previous owner Y.

There could be multiple certificate authorities, just as Verisign, Thawte, etc. have for issuing HTTPS server certificates. But this shifts the trust problem to another place, with probably more attack vectors.

matt-glover commented Feb 3, 2013

Wouldn't that also be a good place for the chain history? i.e.: the CA has approved a gem's new owner X from previous owner Y.

Perhaps I misunderstood the spirit and intent of the "chain history", but I read it like an append-only timeline server that tracks certificate grants over time. If that is the case it should probably remain separate; otherwise a compromise of the CA likely invalidates that cert history.

More info on timeline servers can be found in an EFF proposal to mitigate some of the problems in the existing CA structure used on the web today.
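One common construction for such an append-only timeline is a hash-chained log, where each entry commits to its predecessor, so rewriting any past entry invalidates every later hash. This Ruby sketch illustrates the idea generically (it is not any specific timeline-server design, and the field names are invented):

```ruby
require "digest"
require "json"

# Append-only log: each entry stores the previous entry's hash, the
# event body, and a hash over both.
class TimelineLog
  attr_reader :entries

  GENESIS = "0" * 64

  def initialize
    @entries = []
  end

  def append(event)
    prev = @entries.empty? ? GENESIS : @entries.last[:hash]
    body = JSON.generate(event)
    @entries << { prev: prev, body: body,
                  hash: Digest::SHA256.hexdigest(prev + body) }
  end

  # Anyone holding a copy can re-derive the chain and detect tampering.
  def valid?
    prev = GENESIS
    @entries.all? do |e|
      ok = e[:prev] == prev &&
           e[:hash] == Digest::SHA256.hexdigest(prev + e[:body])
      prev = e[:hash]
      ok
    end
  end
end
```

Because verification needs no secret, mirrors can audit the log independently, which is what keeps a compromise of the CA from silently rewriting the cert history.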

Geal commented Feb 4, 2013

Let's consider for a moment the point of view of the "dumb user" (because most people feel dumb and helpless in front of cryptography).

Dumb user n°1: the amnesia

I just forgot the password for my key, or erased the private key's file without backing it up. How can I get back access to my gem?

Yes, it is dumb, but you would be surprised how common it is with web CAs. In that case, they just regenerate a certificate with the user's new key. But here, with the certificate chain history, I am doomed, because clients will not accept the new key.

Dumb user n°2: I trust computers too much

I am a dumb user but I am security conscious, so I keep my key close to my heart. But I like the convenience of automated systems, so my gems are generated by my CI server. Once all the tests pass for a new version, I download the generated gem, sign it with my key and upload it.

Later, I find out that my CI server has been compromised and that a number of gems were backdoored. How can I revoke the backdoored gems? I don't want to revoke my key, just revoke one signature. This is not possible with this system, because it validates identity, not code.

BTW, that problem would also apply if I found out that I was dumb enough to accept a pull request containing a well-hidden backdoor. We all make errors, but we should be able to recover from them.

It is crucial to consider the human side: you can have the best cryptographic architecture, but never at the expense of usability, or people won't use it.

cheald commented Feb 7, 2013

@Geal --

The first point is basically "the system is too secure, lose the keys and you can't get back in", which is a good point to start from, IMO. I absolutely believe that it could be a common problem, though. The quickest way I can think to fix this is to allow the root of the chain of trust to authorize a cert change on a gem, which would be used by the Rubygems volunteers to manually verify an author's identity, audit the change, and decide to grant an override by signing the cert change request with the root cert.

This, of course, means that if the root cert is compromised, the entire system is even more vulnerable. I don't particularly like this solution.

A "dumber" solution would be something like "welp, that namespace is no longer accessible by anyone because the keys are lost; name the next version of the gem something else." This is messy, and potentially opens up some social-engineering attacks, but it doesn't require that volunteers maintain the override process.

As to the second point, I think that's tricky for the same reasons that yanks are currently tricky. The code that manages the CRL on the client could become signature-aware, so that it could pull a list of revoked namespace+versions, with each revocation signed by the project owner, and then take some appropriate action on the client (notification, uninstalls, whatever). It shouldn't be too hard to piggyback that kind of revocation mechanism on top of any broader certificate-revocation system, unless I'm not thinking it through clearly.
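A signature-aware revocation entry of the kind described might look like the following. The payload format and helper names are invented for illustration; the point is only that the client can verify the revocation itself against the owner's key before acting on it:

```ruby
require "openssl"

owner_key = OpenSSL::PKey::RSA.new(2048)

# Produce a revocation for one gem+version, signed by the owner's key.
def revoke(key, gem_id, version)
  payload = "#{gem_id} #{version}"
  { gem: gem_id, version: version,
    sig: key.sign(OpenSSL::Digest.new("SHA256"), payload) }
end

# Client side: a revocation counts only if it names this gem+version
# AND carries a valid signature from the project owner's public key.
def revoked?(entry, owner_pub, gem_id, version)
  return false unless entry[:gem] == gem_id && entry[:version] == version
  owner_pub.verify(OpenSSL::Digest.new("SHA256"), entry[:sig],
                   "#{gem_id} #{version}")
end

entry = revoke(owner_key, "somegem", "1.2.3")
revoked?(entry, owner_key.public_key, "somegem", "1.2.3") # => true
revoked?(entry, owner_key.public_key, "somegem", "1.2.4") # => false
```

This revokes a single release without touching the owner's key, which is the distinction Geal was asking for.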
