InvalidVolume.NotFound error in DescribeVolumes, but can see volume #92

Open
waynn opened this issue Jun 30, 2015 · 8 comments
@waynn

waynn commented Jun 30, 2015

I'm getting an error when trying to run ec2-automate-backup-awscli.sh. I saw #88 which described a similar issue, but I'm using the cron-primer.sh and .aws file and still getting this error. Here's the output I'm getting.

ec2-user:~/scripts/ec2-automate-backup$ ./ec2-automate-backup-awscli.sh -v "vol-07e1a916" -c ./cron-primer.sh

A client error (InvalidVolume.NotFound) occurred when calling the DescribeVolumes operation: The volume 'vol-07e1a916' does not exist.
An error occurred when running ec2-describe-volumes. The error returned is below:
<nothing here>

But when I run aws ec2 describe-volumes, I can see the volume.

ec2-user:~/scripts/ec2-automate-backup$  aws ec2 describe-volumes
{
    "Volumes": [
        {
            "AvailabilityZone": "us-west-2a", 
            "Attachments": [
                {
                    "AttachTime": "2015-02-25T01:34:00.000Z", 
                    "InstanceId": "i-da56b1d7", 
                    "VolumeId": "vol-07e1a916", 
                    "State": "attached", 
                    "DeleteOnTermination": false, 
                    "Device": "/dev/xvda"
                }
            ], 
            "Tags": [
                {
                    "Value": "true", 
                    "Key": "Backup-Daily"
                }
            ], 
            "Encrypted": false, 
            "VolumeType": "gp2", 
            "VolumeId": "vol-07e1a916", 
            "State": "in-use", 
            "Iops": 24, 
            "SnapshotId": "snap-f518b274", 
            "CreateTime": "2015-02-25T01:34:00.281Z", 
            "Size": 8
        }
    ]
}
@paulwakeford

Different regions would be my first thought. Is the backup script configured to use the default region (us-east-1) while your volume is in us-west-2?
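A quick way to check (assuming the AWS CLI is on your path) is to compare the region the CLI resolves for the default profile against a call that pins the region explicitly:

# region the CLI will use for the default profile
aws configure get region

# pin the region explicitly; if this finds the volume while the un-pinned
# call does not, it's a region mismatch
aws ec2 describe-volumes --volume-ids vol-07e1a916 --region us-west-2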

@waynn

waynn commented Jul 5, 2015

I have region set in my .aws file.

ec2-user:~/scripts/ec2-automate-backup$ cat .aws
[default]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>
region = us-west-2

@paulwakeford

Could you humour me and use the -r switch to set your region explicitly?

@waynn

waynn commented Jul 6, 2015

Absolutely -- and it worked! Any idea what I should change in my configuration to fix this, though?

ec2-user:~/scripts/ec2-automate-backup$ ./ec2-automate-backup-awscli.sh -v "vol-07e1a916" -c ./cron-primer.sh -r "us-west-2"

@paulwakeford

Lots of possibilities, but first I wonder about your AWS CLI setup: your credentials file should be '...in a local file named credentials in a folder named .aws in your home directory', not in a file called .aws. See https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html.

Maybe re-run aws configure to ensure your config is written out correctly, then ensure cron-primer.sh is pointing to your ~/.aws/credentials file and not to .aws? And that the user running the script can access that file.

You can even export the environment variables to be doubly sure. Having said that, the code looks for $EC2_REGION and I'm not sure what sets that (maybe it's a holdover from the old EC2 CLI?) - the current AWS CLI uses AWS_DEFAULT_REGION.
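For example, a rough sketch of what cron-primer.sh could export to cover both (not the shipped file, just an illustration):

# config file written by `aws configure`
export AWS_CONFIG_FILE=/home/ec2-user/.aws/config
# region used by the AWS CLI itself
export AWS_DEFAULT_REGION=us-west-2
# region the backup script checks before falling back to us-east-1
export EC2_REGION=us-west-2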

@waynn

waynn commented Jul 8, 2015

So what's confusing to me is that there's a file called .aws that ships in the repo: https://github.com/colinbjohnson/aws-missing-tools/blob/master/ec2-automate-backup/Resources/.aws

But when aws configure runs, it doesn't just generate one .aws file. It creates a .aws folder with both config and credentials files. When I try changing this line:

# AWS_CONFIG_FILE required for AWS Command Line Interface tools (f.e. ".aws")
export AWS_CONFIG_FILE=/home/ec2-user/.aws/config

I've tried passing in /home/ec2-user/.aws/config (as shown above), /home/ec2-user/.aws/credentials, and /home/ec2-user/.aws, and none of them work. I've also tried passing in my own .aws file with the values set based on the repo .aws file above, and that doesn't work either.

I've tried re-running aws configure and that doesn't change anything.
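For reference, the layout aws configure produces is roughly the following (keys removed):

ec2-user:~$ ls ~/.aws
config  credentials

ec2-user:~$ cat ~/.aws/config
[default]
region = us-west-2

ec2-user:~$ cat ~/.aws/credentials
[default]
aws_access_key_id = <removed>
aws_secret_access_key = <removed>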

@kaburkett

Many others have posted this as an issue.

Change this in the .sh script:

#if the environment variable $EC2_REGION is not set, set the region to us-east-1
  if [[ -z $EC2_REGION ]]; then
    region="us-east-1"
  else
    region=$EC2_REGION
  fi
Obviously, you would want to set the default region to whatever region your volumes are in; then, when you don't pass -r, the script falls back to the correct region instead of us-east-1.
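For example, with volumes in us-west-2 the fallback would become something like:

#if the environment variable $EC2_REGION is not set, set the region to us-west-2
  if [[ -z $EC2_REGION ]]; then
    region="us-west-2"
  else
    region=$EC2_REGION
  fi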

@AdamKobus

I had the same issue. Another way around this, besides the one posted by @kaburkett, is to pass the arguments in the order specified by this line:
while getopts :s:c:r:v:t:k:pnhu opt; do
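So for the command in this thread, that would look something like this (an untested sketch, passing -c, -r, -v in the optstring's order):

./ec2-automate-backup-awscli.sh -c ./cron-primer.sh -r "us-west-2" -v "vol-07e1a916"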
