
not bootable disk #1

Closed
niksfirefly opened this issue Apr 4, 2017 · 12 comments

@niksfirefly

I followed the tutorial
https://github.com/OpenNebula/addon-lxdone/blob/master/Setup.md
but unfortunately, after creating the VM with no errors, I get this message in VNC:
Booting from Hard Disk
Boot failed: not a bootable disk

I tried images created from an exported existing LXC container, and also the
build-img.sh script provided by addon-lxdone:
https://github.com/OpenNebula/addon-lxdone/blob/master/image-handling/build-img.sh
What did I forget?

regards

@dann1
Collaborator

dann1 commented Apr 4, 2017

https://github.com/OpenNebula/addon-lxdone/blob/master/image-handling/build-img.sh is intended for creating a base image from scratch. If you have an existing LXC container, read https://github.com/OpenNebula/addon-lxdone/blob/master/Image.md; there are instructions for that case. Also, mount the image you created and list its contents.
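
A quick first check before mounting (a sketch; the path assumes the image produced by build-img.sh, so substitute yours; the file utility just reads the image header):

file /var/tmp/lxdone.img
# "Linux rev 1.0 ext4 filesystem data ..." -> bare filesystem image, fine for LXD
# "DOS/MBR boot sector ..."                -> partitioned disk with a boot sector
# A bare filesystem image carries no bootloader, so a BIOS boot under KVM
# fails with exactly "Boot failed: not a bootable disk".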

@niksfirefly
Author

I tried both ways; neither works.
First I built lxdone.img with build-img.sh and then, during image creation in Sunstone,
uploaded it or added it via /var/tmp/lxdone.img.
Then I created a virtual appliance following
https://github.com/OpenNebula/addon-lxdone/blob/master/Image.md
Same effect.

There were no errors at any step of the tutorial
https://github.com/OpenNebula/addon-lxdone/blob/master/Setup.md
until the dreadful "VM not bootable" message.

I am using the latest clone from GitHub:
https://github.com/OpenNebula/addon-lxdone

Below is the image info from ONE:

NAME : lxdone
USER : oneadmin
GROUP : oneadmin
DATASTORE : default
TYPE : OS
REGISTER TIME : 04/03 20:05:25
PERSISTENT : No
SOURCE : /var/lib/one//datastores/1/96955803857eb6e97f71f08b9a6264b1
PATH : /var/tmp/lxdone.img
FSTYPE : raw
SIZE : 600M
STATE : rdy
RUNNING_VMS : 0

PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---

IMAGE TEMPLATE
DEV_PREFIX="vd"
DRIVER="raw"

and the ONE template:

ID : 7
NAME : lxd
USER : oneadmin
GROUP : oneadmin
REGISTER TIME : 04/03 20:32:56

PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---

TEMPLATE CONTENTS
CONTEXT=[
NETWORK="YES",
SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
CPU="0.1"
DISK=[
IMAGE="lxdone",
IMAGE_UNAME="oneadmin" ]
GRAPHICS=[
LISTEN="0.0.0.0",
TYPE="VNC" ]
MEMORY="1024"
NIC=[
NETWORK="lxdbr0",
NETWORK_UNAME="oneadmin" ]

@dann1
Collaborator

dann1 commented Apr 4, 2017

Let's inspect the contents of the image you created. Try

sudo losetup /dev/loop0 /var/lib/one//datastores/1/96955803857eb6e97f71f08b9a6264b1
sudo mount /dev/loop0 /mnt
ls -lh /mnt
# when done: sudo umount /mnt && sudo losetup -d /dev/loop0

If there were no errors during the execution of build-img.sh, I'd expect the image to be fine; the commands above will verify that. Also, give me a screenshot of the VNC error. I suspect your host is using KVM, because that particular output

Booting from Hard Disk
Boot failed: not a bootable disk

is something I've seen when using virt-manager.
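
You can also confirm which virtualization driver OpenNebula picked for that host from the frontend (a sketch; the host ID is whatever onehost list reports, and the exact driver name depends on how the LXD driver was registered in oned.conf):

onehost list                          # find the ID of the host the VM was deployed on
onehost show <host_id> | grep -i mad  # an LXD node should report an lxd VM_MAD, not kvm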

@niksfirefly
Author

Yes, of course. I can mount the image:

drwx------ 2 root root 16K Apr 3 19:39 lost+found
-rw-r--r-- 1 root root 1.2K Apr 3 19:44 metadata.yaml
drwxr-xr-x 21 root root 4.0K Apr 3 19:44 rootfs
drwxr-xr-x 2 root root 4.0K Apr 3 19:44 templates

@dann1
Collaborator

dann1 commented Apr 4, 2017

Try
ls -lh /mnt/rootfs/

@niksfirefly
Author

/mnt:
total 28K
drwx------ 2 root root 16K Apr 3 19:39 lost+found
-rw-r--r-- 1 root root 1.2K Apr 3 19:44 metadata.yaml
drwxr-xr-x 21 root root 4.0K Apr 3 19:44 rootfs
drwxr-xr-x 2 root root 4.0K Apr 3 19:44 templates

/mnt/rootfs/:
total 76K
drwxr-xr-x 2 root root 4.0K Apr 3 19:44 bin
drwxr-xr-x 2 root root 4.0K Apr 12 2016 boot
drwxr-xr-x 4 root root 4.0K Apr 3 19:41 dev
drwxr-xr-x 63 root root 4.0K Apr 3 19:44 etc
drwxr-xr-x 2 root root 4.0K Apr 12 2016 home
drwxr-xr-x 11 root root 4.0K Apr 3 19:43 lib
drwxr-xr-x 2 root root 4.0K Apr 3 19:40 lib64
drwxr-xr-x 2 root root 4.0K Apr 3 19:39 media
drwxr-xr-x 2 root root 4.0K Apr 3 19:39 mnt
drwxr-xr-x 2 root root 4.0K Apr 3 19:39 opt
drwxr-xr-x 2 root root 4.0K Apr 12 2016 proc
drwx------ 2 root root 4.0K Apr 3 19:49 root
drwxr-xr-x 6 root root 4.0K Apr 3 19:44 run
drwxr-xr-x 2 root root 4.0K Apr 3 19:44 sbin
drwxr-xr-x 2 root root 4.0K Apr 3 19:39 srv
drwxr-xr-x 2 root root 4.0K Feb 5 2016 sys
drwxrwxrwt 2 root root 4.0K Apr 3 19:44 tmp
drwxr-xr-x 10 root root 4.0K Apr 3 19:39 usr
drwxr-xr-x 11 root root 4.0K Apr 3 19:39 var

/mnt/templates/:
total 12K
-rw-r--r-- 1 root root 39 Apr 3 19:44 hostname.tpl
-rw-r--r-- 1 root root 262 Apr 3 19:44 hosts.tpl
-rw-r--r-- 1 root root 7 Apr 3 19:44 upstart-override.tpl

@niksfirefly
Author

And a screenshot:
[screenshot: OpenNebula Sunstone cloud operations center]

@dann1
Collaborator

dann1 commented Apr 4, 2017

Well, it seems the image is OK, and, as I suspected, you are not using LXD in OpenNebula. Give me a screenshot of the host tab you are using. Take this one as a reference.

[screenshot: reference host tab for an LXD node]

@dann1
Collaborator

dann1 commented Apr 4, 2017

Read and follow https://github.com/OpenNebula/addon-lxdone/blob/master/Setup.md#42-virtualization-node carefully; it seems you made a mistake there.

@jmdelafe jmdelafe closed this as completed Apr 4, 2017
@niksfirefly
Author

Thanks for your suggestions.
Indeed, the VMs were initialized on the default KVM host, so I changed the VM template to
SCHED_REQUIREMENTS = "ID = \"4\""
where the ID is that of the proper LXD host definition.
Now the VMs are created, but without any network configuration inside them.
I followed
https://github.com/OpenNebula/addon-lxdone/blob/master/Setup.md
exactly.

I wonder: is this normal behaviour, and must I create the network config manually in every LXD VM?
Why do we remove eth0, as the tutorial does?
lxc profile device remove default eth0

@jmdelafe
Contributor

jmdelafe commented Apr 5, 2017

The VMs were initialized on that host because of the OpenNebula scheduler's decision. You must specify an LXD host (or a cluster) in the VM's template to avoid that behavior.

We remove eth0 from the default profile because we want containers to have only the interfaces attached from their template. If you skip that step, containers will get an interface that was not declared inside OpenNebula, and contextualization, for example, will not work.
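
For reference, the two relevant commands on the LXD node (standard lxc CLI; a sketch of the Setup.md step plus a check):

lxc profile device remove default eth0   # the Setup.md step: drop the stock NIC from the default profile
lxc profile show default                 # verify that no eth0 device is listed anymore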

@jmdelafe jmdelafe reopened this Apr 5, 2017
@jmdelafe jmdelafe closed this as completed Apr 5, 2017
@dann1
Collaborator

dann1 commented Apr 5, 2017

If you leave eth0 in the default LXD profile, that network interface won't be controlled by OpenNebula, and that's not what we aim for.
