Attached volumes don't get marked as missing after check #876

Closed · benhamad (Contributor) opened this issue Feb 25, 2018 · 0 comments
If a volume gets deleted manually, it doesn't get marked as MISSING after a check operation. Also, if the volume was attached to an instance, we get an obscure error when NixOps tries to attach it again on the next deploy. This is a minimal expression to reproduce it:

let
  region = "us-east-1";
  zone = "us-east-1d";
in
{
  resources.ec2KeyPairs.my-key-pair =
    { inherit region; };

  resources.ec2SecurityGroups.ssh-security-group = {
    inherit region;
    description = "Security group for NixOps tests";
    rules = [ {
      fromPort = 22;
      toPort = 22;
      sourceIp = "0.0.0.0/0";
    } ];
  };

  machine =
    { resources, lib, ... }:
    {
      deployment.targetEnv = "ec2";
      deployment.ec2 = {
        inherit region zone;
        ami = "ami-40bee63a";
        securityGroups = [ resources.ec2SecurityGroups.ssh-security-group ];
        instanceType = "t1.micro";
        blockDeviceMapping."/dev/sdf".disk = resources.ebsVolumes.data_volume;
        keyPair = resources.ec2KeyPairs.my-key-pair;
      };
    };

  resources.ebsVolumes.data_volume = {
    inherit region zone;
    size = 1;
    volumeType = "gp2";
  };
}
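
To reproduce, save the expression (the file name volume_bug.nix below is just an example) and deploy it with the standard NixOps commands:

nixops create ./volume_bug.nix -d volume_bug
nixops deploy -d volume_bug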

After deploying the network, detach and then delete the volume manually:

aws ec2 detach-volume --volume-id VOLUME_ID --force
aws ec2 delete-volume --volume-id VOLUME_ID --region us-east-1

If we run a check operation, NixOps notices that the volume is missing but doesn't mark it as such:

[nix-shell:~/git/nixops]$ nixops check -d volume_bug 
Machines state:
+---------+--------+-----+-----------+----------+----------------+----------------------------------------+------------------------------------------------------------+
| Name    | Exists | Up  | Reachable | Disks OK | Load avg.      | Units                                  | Notes                                                      |
+---------+--------+-----+-----------+----------+----------------+----------------------------------------+------------------------------------------------------------+
| machine | Yes    | Yes | Yes       | No       | 0.04 0.28 0.21 | proc-sys-fs-binfmt_misc.mount [failed] | volume ‘VOLUME_ID’ not attached to ‘/dev/xvdf’             |
|         |        |     |           |          |                |                                        | volume ‘VOLUME_ID’ no longer exists                        |
+---------+--------+-----+-----------+----------+----------------+----------------------------------------+------------------------------------------------------------+
Non machines resources state:
+--------------------+--------+
| Name               | Exists |
+--------------------+--------+
| data_volume        | Yes    |
| my-key-pair        | Yes    |
| ssh-security-group | Yes    |
+--------------------+--------+

And since the volume wasn't marked as missing, it won't be recreated in the next deploy operation, leading to the following error:

[nix-shell:~/git/nixops]$ nixops deploy -d volume_bug --allow-recreate
machine...........> warning: device ‘/dev/xvdf’ was manually detached!
machine...........> warning: volume ‘VOLUME_ID’ has disappeared; will create an empty volume to replace it
error: 'NoneType' object has no attribute 'status'
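
For context on that traceback: the message is consistent with the volume lookup returning None once the volume has been deleted out of band, while the caller dereferences .status unconditionally. The sketch below is hypothetical, not NixOps internals (NixOps used boto 2 at the time; this uses boto3 for illustration, and the helper name is made up). It shows the kind of guarded lookup that would let check mark the resource as MISSING instead of crashing:

# Hypothetical sketch (not NixOps code): guard the volume lookup so a
# deleted volume yields None instead of an AttributeError downstream.
import boto3
from botocore.exceptions import ClientError

def get_volume_state(volume_id, region="us-east-1"):
    """Return the volume's state, or None if it no longer exists."""
    ec2 = boto3.resource("ec2", region_name=region)
    volume = ec2.Volume(volume_id)
    try:
        volume.load()  # DescribeVolumes call; fails if the volume is gone
    except ClientError as e:
        if e.response["Error"]["Code"] == "InvalidVolume.NotFound":
            return None  # caller should mark the resource as MISSING
        raise
    return volume.state  # e.g. "available" or "in-use"

A caller that effectively does get_volume_state(volume_id).status with no None check fails exactly like the deploy above; checking for None and flipping the deployment state to MISSING would let the next deploy recreate the volume.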

Please note that I'm working on this, so please don't create a PR for it.
