imagebuilder fails to query the newly registered AMI #293

Closed
qqshfox opened this Issue Sep 11, 2017 · 3 comments


qqshfox commented Sep 11, 2017

imagebuilder may complain that the image is not found after the build, and the execution fails. But from the logs preceding the exception, we can see that the AMI has actually been registered. It seems the newly created AMI is not yet available, even though bootstrap-vz claims it is.

Removing volume partitions mapping
Executing: kpartx -ds /dev/xvdf
Detaching the volume
Deleting mountpoint for the bootstrap volume
Creating a snapshot of the EBS volume
Registering the image as an AMI
Deleting the volume
Deleting workspace
Successfully completed bootstrapping
I0911 14:53:04.619685   89132 interface.go:75] Executing command: ["rm" "-rf" "/tmp/imagebuilder-8294145453372472625"]
I0911 14:53:04.657912   89132 interface.go:83] Output was:
I0911 14:53:04.657990   89132 aws.go:532] AWS DescribeImages Filter:Name="k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-11", Owner=self
F0911 14:53:04.827255   89132 main.go:240] image not found after build: "k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-11"
goroutine 1 [running]:
k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog.stacks(0xc420085f00, 0xc4200a4000, 0x79, 0xc7)
	/Users/hanfei/workspace/go/src/k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog/glog.go:769 +0xcf
k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog.(*loggingT).output(0x1b55c20, 0xc400000003, 0xc4200b62c0, 0x1b01b13, 0x7, 0xf0, 0x0)
	/Users/hanfei/workspace/go/src/k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog/glog.go:720 +0x345
k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog.(*loggingT).printf(0x1b55c20, 0x3, 0x177448a, 0x1f, 0xc42012bcb8, 0x1, 0x1)
	/Users/hanfei/workspace/go/src/k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog/glog.go:655 +0x14c
k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog.Fatalf(0x177448a, 0x1f, 0xc42012bcb8, 0x1, 0x1)
	/Users/hanfei/workspace/go/src/k8s.io/kube-deploy/imagebuilder/vendor/github.com/golang/glog/glog.go:1148 +0x67
main.main()
	/Users/hanfei/workspace/go/src/k8s.io/kube-deploy/imagebuilder/main.go:240 +0x1a20

After waiting a minute or so, the AMI finally becomes available:

$ aws ec2 describe-images --filters Name=name,Values=k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-11
{
    "Images": []
}
$ aws ec2 describe-images --filters Name=name,Values=k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-11
{
    "Images": []
}
$ aws ec2 describe-images --filters Name=name,Values=k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-11
{
    "Images": [
        {
            "VirtualizationType": "hvm",
            "Name": "k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-11",
            "Hypervisor": "xen",
            "SriovNetSupport": "simple",
            "ImageId": "ami-775f8f1a",
            "State": "available",
            ...
            "Description": "Kubernetes 1.7 Base Image - Debian jessie amd64"
        }
    ]
}

qqshfox added a commit to qqshfox/kops that referenced this issue Sep 11, 2017

fejta-bot commented Jan 30, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented Mar 2, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented Apr 1, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
