Transit to shamir recovery #9316

Closed
sirkubax opened this issue Jun 25, 2020 · 7 comments
@sirkubax

sirkubax commented Jun 25, 2020

Hello
BUG:
The documentation is a bit thin when it comes to transit recovery in general.

How is one supposed to perform a migration back from transit to Shamir when the 'transit source' (here 10.10.1.138:8200) is down?

Why doesn't the server start and wait so that I could run vault operator unseal -migrate, when the seal "transit" is disabled?

/usr/local/bin/vault server -recovery -config=/etc/vault.d/vault_main.hcl -log-level=debug

Error parsing Seal configuration: Put http://10.10.1.138:8200/v1/transit/encrypt/autounseal: dial tcp 10.10.1.138:8200: connect: connection refused
seal "transit" {
  address = "http://10.10.1.138:8200"
  disable_renewal = "false"
  key_name = "autounseal"
  mount_path = "transit/"
  token = "s.xxxxxx"
  disabled = "true"
}
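
For reference, my understanding of the normal (documented) migration flow - which assumes the transit Vault is still reachable - is roughly:

# 1. Add disabled = "true" to the seal "transit" stanza (as above).
# 2. Restart the Vault server; it should enter seal migration mode.
# 3. Unseal with the *recovery* keys, passing -migrate each time:
vault operator unseal -migrate
# (repeat until the recovery key threshold is reached)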

What is more, I thought I could fake it with anything pretending to listen on that port (like nc -l 127.0.0.1 8200), or with a fresh instance of Vault - just to let the server boot (it should not matter, since the seal is disabled, right? :D).

Depending on whether I pass the token from the original transit source (10.10.1.138:8200) or from the new temporary service (10.10.1.9:8400), I get either 'permission denied' or a proper 'disabled' message (in the latter case the manual unseal still failed - I guess the keys have to match the former transit instance somehow?).

Example:
With the original token from 10.10.1.138:8200, while the (disabled) transit seal already points to 10.10.1.9:8400 - I cannot even try to migrate:

seal "transit" {
  address = "http://10.10.1.9:8400"
  disable_renewal = "false"
  key_name = "autounseal"
  mount_path = "transit/"
  token = "KEY FROM 10.10.1.138:8200"
  disabled  = "true"
}
/usr/local/bin/vault server   -config=/etc/vault.d/vault_main.hcl -log-level=trace
Error parsing Seal configuration: Error making API request.

URL: PUT http://10.10.1.9:8400/v1/transit/encrypt/autounseal
Code: 403. Errors:

* permission denied

With the proper token for 10.10.1.9:8400 the server starts, but vault operator unseal -migrate then fails:

seal "transit" {
  address = "http://10.10.1.9:8400"
  disable_renewal = "false"
  key_name = "autounseal"
  mount_path = "transit/"
  token = "KEY FROM 10.10.1.9:8400"
  disabled  = "true"
}
==> Vault server configuration:

               Seal Type: transit
         Transit Address: http://10.10.1.9:8400
        Transit Key Name: autounseal
      Transit Mount Path: transit/
             Api Address: http://10.10.1.138:8200
                     Cgo: disabled
         Cluster Address: https://10.10.1.138:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "10.10.1.138:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: trace
                   Mlock: supported: true, enabled: true
           Recovery Mode: false
                 Storage: raft (HA available)
                 Version: Vault v1.4.2

==> Vault server started! Log data will stream in below:

2020-06-25T09:19:09.257Z [INFO]  proxy environment: http_proxy= https_proxy= no_proxy=
2020-06-25T09:19:09.270Z [DEBUG] storage.cache: creating LRU cache: size=0
2020-06-25T09:19:09.275Z [TRACE] seal-transit: successfully renewed token
2020-06-25T09:19:09.301Z [DEBUG] cluster listener addresses synthesized: cluster_addresses=[10.10.1.138:8201]
2020-06-25T09:19:09.301Z [WARN]  core: entering seal migration mode; Vault will not automatically unseal even if using an autoseal: from_barrier_type=transit to_barrier_type=shamir

@sirkubax
Author

sirkubax commented Jul 1, 2020

Six days and no reply - is this a mystery :) or has no one had this case before?
Let me ask it in a simpler form:

Cluster A (Shamir) --> Cluster B (transit from A)

How does one unseal cluster B (or otherwise perform recovery) when cluster A is down and cannot be started?

@ncabatoff
Collaborator

Hi @sirkubax,

There's no way to do anything with a Vault cluster if its seal is unusable, which is the case for a transit seal whose transit instance is not reachable. I think I saw elsewhere that you're trying to have two clusters mutually unseal one another, which is not a viable configuration. It may work for a little while, but sooner or later will break down and then you'll be stuck.

@sirkubax
Author

sirkubax commented Jul 8, 2020

Thanks for the reply @ncabatoff.

I'm still learning how Vault works. My idea was: would it be possible to have a 'backup' unseal procedure - both Shamir and transit for the same credential set - so that if transit is unavailable one could fall back to Shamir? A bit like how GPG works :)

As I understand it now, one currently needs a working, unsealed cluster in order to perform a >migration< to another seal, and that is the problem.
Would it be possible to 'back up' the transit key and later pass that 'backup token' back in (even via a temporary one-node Vault where you could embed it)?

@ncabatoff
Collaborator

Yes, I understand what you want - it's not an uncommon feature request; e.g. #9421 just came in. Unfortunately it's not currently possible in Vault to have "insurance" if your seal mechanism breaks. If you use a cloud KMS as your auto-unseal mechanism, it's not really an issue. If you want to run entirely on-prem, and thus want to use transit auto-unseal, I suggest you take regular snapshots of the transit Vault server and have a means to deploy a new instance and restore the snapshot in case something happens to the transit Vault server. Or just use Shamir.
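
For example, if the transit Vault uses integrated (raft) storage, the snapshot routine could be as simple as this sketch (paths illustrative):

# On the transit Vault (needs a token permitted on sys/storage/raft):
vault operator raft snapshot save /backups/transit-vault.snap

# Recovery: stand up a replacement Vault, initialize and unseal it, then:
vault operator raft snapshot restore -force /backups/transit-vault.snap
# (-force allows loading a snapshot taken on a different cluster; you still
#  need the original cluster's unseal/recovery keys to use the restored data)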

@mrkeuz

mrkeuz commented Sep 15, 2021

@ncabatoff Do I understand correctly that there is ABSOLUTELY NO way to unseal an auto-unseal-configured server? Even if you have all the "recovery_keys" and the "root_token"? I mean, if "cross"-sealed servers go down, you have lost all the data?

What if you export and back up the auto-unseal key and then set up some "emergency" server with the same key for recovery? Would that work, or does Vault somehow check the remote server (its signature, its address) to make sure the server is "legal"?
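
I mean something like this, assuming the key had been configured as exportable with plaintext backup allowed ahead of time (names illustrative):

# On the original transit Vault, before any disaster:
vault write transit/keys/autounseal/config exportable=true allow_plaintext_backup=true
vault read -field=backup transit/backup/autounseal > autounseal-key.backup

# On an "emergency" transit Vault, under the same mount path and key name:
vault write transit/restore/autounseal backup=@autounseal-key.backup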

@candlerb
Contributor

candlerb commented Apr 1, 2022

Do I understand correctly that there is ABSOLUTELY NO way to unseal an auto-unseal-configured server?

That is my understanding of the situation, yes.

To unseal, you need to get access to the master decryption key. With Shamir sealing, the Shamir key is the master key, but also acts as the recovery key. Why? If you know the master key then you could decrypt the database yourself anyway; so Vault may as well issue you a root token.

But with transit sealing, the master key (which encrypts the data) and the recovery key (which instructs Vault to generate a root token) are different.

Taking this to the limit: when you are using a cloud KMS like AWS or GCP, Vault never learns the master key itself, since all the encryption and decryption is done by the KMS. The recovery key therefore must be different to the master key.
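
To see the difference in practice: with an auto-unseal setup, the recovery keys feed the generate-root flow rather than unseal. A sketch:

# Recovery keys authorize privileged operations such as root token
# generation; they cannot decrypt the barrier themselves:
vault operator generate-root -init
vault operator generate-root   # each holder supplies one recovery key share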

I do agree that when you are using transit sealing, it would be extremely helpful to have an option for manual unsealing, as an insurance policy against the upstream Vault being destroyed. This is technically possible, but has not been implemented.

I mean, if "cross"-sealed servers go down, you have lost all the data?

Yes, but I suggest you forget about "cross sealing". Instead, make a tree: have one "parent" vault which does unsealing for one or more "child" vaults. The "parent" vault is dedicated to the unsealing function, so requires very little in the way of resources. If you run it as a tiny instance in AWS or GCP, then it can unseal itself.
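
Concretely, each "child" vault would just carry a seal stanza pointing at the dedicated parent (values illustrative):

seal "transit" {
  address    = "http://parent-vault.internal:8200"
  key_name   = "child-a-unseal"
  mount_path = "transit/"
  token      = "s.xxxxxx"
}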

@ncabatoff
Collaborator

Marking this as a duplicate of #6046.
