start vm fails with "cannot create ... device.map" #228

Closed

arlenarlenarlen opened this issue Jan 3, 2017 · 13 comments

@arlenarlenarlen
Hi there, first time caller, longtime fan.

I'm running FreeNAS. At some point, my Ubuntu Server VM stopped working, perhaps after a host reboot, giving me the error "not a valid guest name".

I've since upgraded FreeNAS to 9.10.2 (from 9.10.1-U4?) and am no longer receiving the "not a valid guest name" error, but rather a new and exciting one.

[root@freenas] ~# iohyve version
iohyve v0.7.7 2016/11/10 I Think I'll Go for a Walk Edition
[root@freenas] ~# iohyve list
Guest     VMM?  Running  rcboot?  Description
ubusrv16  YES   NO       NO       Wed Dec 14 22:30:57 PST 2016
[root@freenas] ~# iohyve getall ubusrv16
Getting ubusrv16 iohyve properties...
bargs          -A_-H_-P
boot           0
con            nmdm4
cpu            1
description    Wed Dec 14 22:30:57 PST 2016
install        no
loader         grub-bhyve
name           ubusrv16
os             debian
persist        1
ram            2G
size           20G
tap            tap3
[root@freenas] ~# iohyve start ubusrv16
Starting ubusrv16... (Takes 15 seconds for FreeBSD guests)
[root@freenas] ~# GRUB Process does not run in background....
If your terminal appears to be hanging, check iohyve console ubusrv16 in second terminal to complete GRUB process...
/usr/local/sbin/iohyve: cannot create /iohyve/ubusrv16/device.map: No such file or directory
/usr/local/sbin/iohyve: cannot create /iohyve/ubusrv16/device.map: No such file or directory

I'm guessing that iohyve is literally trying to write to /iohyve, but it doesn't live at the root directory; it lives at /mnt/bucket/iohyve/, and I'm not sure why it's doing this. I believe other commands, such as iohyve list and iohyve getall, are "looking" in the right place.

I've also tried creating a new VM, which iohyve does successfully, but then fails with the same error when I go to install the OS.

Looking at the console view gives me an empty grub prompt.

@pr1ntf pr1ntf added the bug label Jan 3, 2017
@pr1ntf
Owner

pr1ntf commented Jan 7, 2017

Hi there, first time caller, longtime fan.

Welcome to the show.

I'm guessing that iohyve is trying to literally write to /iohyve

DING DING DING. We have a winner! Caller number #228!

This needs to be called from the mountpoint, and not depend on iohyve living in /iohyve. An "easy" fix I hope to have patched up quick. ™️
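For reference, a minimal sketch of that idea (not iohyve's actual code; the guest and dataset names are examples from this thread, and a temporary directory stands in for the ZFS mountpoint, which the real fix would query with zfs get -H -o value mountpoint):

```shell
#!/bin/sh
# Sketch: write device.map under the guest's real mountpoint instead of
# assuming it lives at /iohyve.
write_device_map() {
  mountpoint="$1"   # e.g. /mnt/bucket/iohyve/ubusrv16 on the reporter's box
  dataset="$2"      # e.g. maindata/iohyve/ubusrv16
  printf '(hd0) /dev/zvol/%s/disk0\n' "$dataset" > "$mountpoint/device.map"
}

# Demo against a temporary directory standing in for the ZFS mountpoint:
mkdir -p /tmp/iohyve-demo/ubusrv16
write_device_map /tmp/iohyve-demo/ubusrv16 maindata/iohyve/ubusrv16
cat /tmp/iohyve-demo/ubusrv16/device.map
```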

@pr1ntf
Owner

pr1ntf commented Jan 7, 2017

While looking into this, I recalled that this was an issue in the past, and was previously fixed for FreeNAS users. (Kudos to @EpiJunkie)

Specifically, in the setup function. So, two questions:

  1. How did you set up iohyve?
  2. Do you have the link? (ln -s /mnt/iohyve /iohyve)

Unfortunately, bhyve does not currently support nesting, so I cannot test iohyve in a FreeNAS VM. I'm flying blind over here, and any help is appreciated.
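A quick way to check question 2 is to compare what the symlink resolves to against the dataset's actual mountpoint. A sketch, demoed with temporary stand-in paths (on a real FreeNAS box you'd readlink /iohyve and compare it to zfs get -H -o value mountpoint on the iohyve dataset):

```shell
#!/bin/sh
# Sketch: verify a symlink resolves to the expected mountpoint.
link_ok() {
  [ "$(readlink "$1")" = "$2" ]
}

mkdir -p /tmp/linkdemo/mnt/iohyve
rm -f /tmp/linkdemo/iohyve
ln -s /tmp/linkdemo/mnt/iohyve /tmp/linkdemo/iohyve
if link_ok /tmp/linkdemo/iohyve /tmp/linkdemo/mnt/iohyve; then
  echo "link OK"
else
  echo "link missing or wrong -- recreate it with: ln -s <mountpoint> /iohyve"
fi
```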

@internationils

internationils commented Jan 7, 2017

I have the symlink, and it seems to work fine. I just added the output from iohyve with a "set -x" to #227 as I didn't want to spam this ticket.

  • newlines in grub work, but only after provoking an error message once with "ls hd(1)" (which doesn't exist)
  • hd0 does exist, it was overwritten with the grub> prompt previously
  • hd0 gives a "Device hd0: No known filesystem detected - Total size 16777216 sectors" ...but file reports it as a "web-bhyve/disk0: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200"

PS. I think documenting the "set -x" option for debugging might be a good idea as it shows all the bhyve and zfs stuff happening underneath iohyve...
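For anyone following along, here's what that tracing looks like on a trivial stand-in script (the same invocation works on /usr/local/sbin/iohyve, as in the dump below):

```shell
#!/bin/sh
# Demo of `sh -x` tracing on a tiny stand-in script.
cat > /tmp/trace-demo.sh <<'EOF'
name="ubusrv16"
echo "Starting $name..."
EOF
# -x prints each command (prefixed with '+') to stderr as it runs:
sh -x /tmp/trace-demo.sh 2>&1
```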

@pr1ntf
Owner

pr1ntf commented Jan 7, 2017

Please spam this issue as #227 is closed.
I am copy and pasting it here.

[root@freenas /mnt/iohyve]# sh -x /usr/local/sbin/iohyve start web-bhyve   
sh -x /usr/local/sbin/iohyve start web-bhyve
+ sh -x /usr/local/sbin/iohyve start web-bhyve
+ PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin
+ grep POPCNT /var/run/dmesg.boot
+ [ -n '  Features2=0x29ee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,POPCNT,AESNI>' ]
+ grep CPU: /var/run/dmesg.boot
+ grep Intel
+ [ -n 'CPU: Intel(R) Xeon(R) CPU           L5630  @ 2.13GHz (2133.45-MHz K8-class CPU)' ]
+ grep VT-x: /var/run/dmesg.boot
+ grep UG
+ [ -z '  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID' ]
+ [ -e ./lib/ioh-cmd ]
+ [ -e /usr/local/lib/iohyve ]
+ LIB=/usr/local/lib/iohyve
+ . /etc/rc.subr
+ : 69050
+ export RC_PID
+ [ -z '' ]
+ _rc_subr_loaded=YES
+ SYSCTL=/sbin/sysctl
+ SYSCTL_N='/sbin/sysctl -n'
+ SYSCTL_W=/sbin/sysctl
+ ID=/usr/bin/id
+ IDCMD='if [ -x /usr/bin/id ]; then /usr/bin/id -un; fi'
+ PS='/bin/ps -ww'
+ /bin/ps -ww -p 69050 -o jid=
+ JID=0
+ _rc_namevarlist='program chroot chdir env flags fib nice user group groups prepend'
+ _rc_subr_loaded=:
+ . /usr/local/lib/iohyve/ioh-cmd
+ [ ]
+ . /usr/local/lib/iohyve/ioh-setup
+ . /usr/local/lib/iohyve/ioh-zfs
+ . /usr/local/lib/iohyve/ioh-console
+ . /usr/local/lib/iohyve/ioh-iso
+ . /usr/local/lib/iohyve/ioh-firmware
+ . /usr/local/lib/iohyve/ioh-guest
+ . /usr/local/lib/iohyve/ioh-disk
+ . /usr/local/lib/iohyve/ioh-tap
+ [ -z start ]
+ __root_req_cmd start
+ return 0
+ whoami
+ [ root != root ]
+ __parse_cmd start web-bhyve
+ [ 2 -gt 0 ]
+ __guest_start start web-bhyve
+ local name=web-bhyve
+ [ -z web-bhyve ]
+ __guest_exist web-bhyve
+ zfs get -H -s local,received -o value iohyve:name
+ grep '^web-bhyve$'
+ local flag=
+ local pci=
+ local runmode=1
+ zfs get -H -s local,received -o name,value -t filesystem iohyve:name
+ printf '\t'
+ grep '	web-bhyve$'
+ cut -f1
+ local dataset=maindata/iohyve/web-bhyve
+ zfs get -H -o value iohyve:loader maindata/iohyve/web-bhyve
+ local loader=grub-bhyve
+ zfs get -H -o value iohyve:template maindata/iohyve/web-bhyve
+ local template=NO
+ [ NO = YES ]
+ [ grub-bhyve = uefi ]
+ zfs get -H -o value mountpoint maindata/iohyve/web-bhyve
+ local mountpoint=/mnt/iohyve/web-bhyve
+ [ -d /mnt/iohyve/web-bhyve ]
+ pgrep -fx 'bhyve: ioh-web-bhyve'
+ local running=
+ [ -z ]
+ runmode=1
+ echo 'Starting web-bhyve... (Takes 15 seconds for FreeBSD guests)'
Starting web-bhyve... (Takes 15 seconds for FreeBSD guests)
+ __guest_prepare web-bhyve
+ local name=web-bhyve
+ zfs get -H -s local,received -o name,value -t filesystem iohyve:name
+ printf '\t'
+ grep '	web-bhyve$'
+ cut -f1
+ local dataset=maindata/iohyve/web-bhyve
+ __zfs_get_pcidev_conf maindata/iohyve/web-bhyve
+ local pool=maindata/iohyve/web-bhyve
+ local 'oldifs= 	
'
+ IFS='
'
+ zfs get -H -o property,value all maindata/iohyve/web-bhyve
+ grep iohyve:pcidev:
+ sort
+ IFS=' 	
'
+ local pci=
+ zfs get -H -o value iohyve:tap maindata/iohyve/web-bhyve
+ local listtap=tap0
+ echo tap0
+ sed -n 1p
+ tr , '\n'
+ [ tap0 ]
+ [ tap0 != - ]
+ ifconfig -l
+ tr ' ' '\n'
+ grep -F -w tap0
+ local tapif=tap0
+ [ -z tap0 ]
+ zfs get -H -o value iohyve:mac_tap0 maindata/iohyve/web-bhyve
+ local mac=-
+ [ - = - ]
+ pci=' virtio-net,tap0'
+ pci='ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0  virtio-net,tap0'
+ pci='hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0  virtio-net,tap0'
+ echo hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 virtio-net,tap0
+ pci='hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 virtio-net,tap0'
+ __guest_boot web-bhyve 1 'hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 virtio-net,tap0'
+ local name=web-bhyve
+ [ -z web-bhyve ]
+ __guest_exist web-bhyve
+ zfs get -H -s local,received -o value iohyve:name
+ grep '^web-bhyve$'
+ local runmode=1
+ local 'pci=hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 virtio-net,tap0'
+ zfs get -H -s local,received -o name,value -t filesystem iohyve:name
+ printf '\t'
+ grep '	web-bhyve$'
+ cut -f1
+ local dataset=maindata/iohyve/web-bhyve
+ zfs get -H -o value iohyve:ram maindata/iohyve/web-bhyve
+ local ram=4096M
+ zfs get -H -o value iohyve:con maindata/iohyve/web-bhyve
+ local con=nmdm0
+ zfs get -H -o value iohyve:cpu maindata/iohyve/web-bhyve
+ local cpu=2
+ zfs get -H -o value iohyve:persist maindata/iohyve/web-bhyve
+ local persist=1
+ zfs get -H -o value iohyve:bargs maindata/iohyve/web-bhyve
+ local bargexist=-A_-H_-P
+ echo -A_-H_-P
+ sed -e 's/_/ /g'
+ local 'bargs=-A -H -P'
+ zfs set iohyve:install=no maindata/iohyve/web-bhyve
+ __guest_get_bhyve_cmd 'hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 virtio-net,tap0'
+ local 'devices=hostbridge lpc ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 virtio-net,tap0'
+ local pci_slot_count=0
+ echo '-s 0,hostbridge'
+ pci_slot_count=1
+ echo '-s 1,lpc'
+ pci_slot_count=2
+ echo '-s 2,ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0'
+ pci_slot_count=3
+ echo '-s 3,virtio-net,tap0'
+ pci_slot_count=4
+ local 'pci_args=-s 0,hostbridge
-s 1,lpc
-s 2,ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0
-s 3,virtio-net,tap0'
+ local runstate=1
+ exit
+ [ 1 = 1 ]
+ __guest_load web-bhyve /dev/zvol/maindata/iohyve/web-bhyve/disk0
+ local name=web-bhyve
+ local media=/dev/zvol/maindata/iohyve/web-bhyve/disk0
+ [ -z /dev/zvol/maindata/iohyve/web-bhyve/disk0 ]
+ __guest_exist web-bhyve
[root@freenas /mnt/iohyve]# + zfs get -H -s local,received -o value iohyve:name
+ grep '^web-bhyve$'
+ zfs get -H -t volume -o name,value iohyve:name
+ printf 'disk0\t'
+ grep 'disk0	web-bhyve$'
+ cut -f1
+ local disk=maindata/iohyve/web-bhyve/disk0
+ zfs get -H -o name,value -s local,received -t filesystem iohyve:name
+ printf '\t'
+ grep '	web-bhyve$'
+ cut -f1
+ local dataset=maindata/iohyve/web-bhyve
+ zfs get -H -o value iohyve:ram maindata/iohyve/web-bhyve
+ local ram=4096M
+ zfs get -H -o value iohyve:con maindata/iohyve/web-bhyve
+ local con=nmdm0
+ zfs get -H -o value iohyve:loader maindata/iohyve/web-bhyve
+ local loader=grub-bhyve
+ zfs get -H -o value iohyve:install maindata/iohyve/web-bhyve
+ local install=no
+ zfs get -H -o value iohyve:os maindata/iohyve/web-bhyve
+ local os=debian8
+ zfs get -H -o value iohyve:autogrub maindata/iohyve/web-bhyve
+ local autogrub=-
+ zfs get -H -o value iohyve:bargs maindata/iohyve/web-bhyve
+ local bargexist=-A_-H_-P
+ echo -A_-H_-P
+ sed -e 's/_/ /g'
+ local 'bargs=-A -H -P'
+ local test_for_wire_memory=-S
+ local wire_memory=
+ [ grub-bhyve = grub-bhyve ]
+ echo 'GRUB Process does not run in background....'
GRUB Process does not run in background....
+ echo 'If your terminal appears to be hanging, check iohyve console web-bhyve in second terminal to complete GRUB process...'
If your terminal appears to be hanging, check iohyve console web-bhyve in second terminal to complete GRUB process...
+ [ no = yes ]
+ [ no = no ]
+ [ debian8 = openbsd60 ]
+ [ debian8 = openbsd59 ]
+ [ debian8 = openbsd58 ]
+ [ debian8 = openbsd57 ]
+ [ debian8 = netbsd ]
+ [ debian8 = debian ]
+ [ debian8 = d8lvm ]
+ [ debian8 = centos6 ]
+ [ debian8 = centos7 ]
+ [ debian8 = custom ]
+ printf '\(hd0\)\ /dev/zvol/maindata/iohyve/web-bhyve/disk0\n'
+ printf '\(cd0\)\ /dev/zvol/maindata/iohyve/web-bhyve/disk0\n'
+ grub-bhyve -m /iohyve/web-bhyve/device.map -r hd0,msdos1 -c /dev/nmdm0A -M 4096M ioh-web-bhyve

[root@freenas /mnt/iohyve]# 
grub> ls
(hd0) (cd0) (host)
grub> ls (cd0)
Device cd0: No known filesystem detected - Total size 16777216 sectors
grub> ls (host)
Device host: Filesystem type hostfs - Total size 0 sectors
grub> ls (hd0)
Device hd0: No known filesystem detected - Total size 16777216 sectors
grub> exit
Killed
[nils@freenas /mnt/iohyve]$ sudo file web-bhyve/disk0
web-bhyve/disk0: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200

...after the kill, in the main iohyve start terminal:

+ local vmpid=69618
+ wait 69618
+ bhyve -c 2 -A -H -P -m 4096M -s 0,hostbridge -s 1,lpc -s 2,ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 -s 3,virtio-net,tap0 -lcom1,/dev/nmdm0A ioh-web-bhyve
vm exit[0]
	reason		VMX
	rip		0x0000000000000000
	inst_length	0
	status		0
	exit_reason	33
	qualification	0x0000000000000000
	inst_type		0
	inst_error		0
+ vmrc=134
+ sleep 5
+ [ 1 == 0 ]
+ [ 134 == 1 ]
+ zfs get -H -o value iohyve:persist maindata/iohyve/web-bhyve
+ [ 1 != 1 ]
+ [ 1 = 1 ]
+ __guest_load web-bhyve /dev/zvol/maindata/iohyve/web-bhyve/disk0
+ local name=web-bhyve
+ local media=/dev/zvol/maindata/iohyve/web-bhyve/disk0
+ [ -z /dev/zvol/maindata/iohyve/web-bhyve/disk0 ]
+ __guest_exist web-bhyve
+ zfs get -H -s local,received -o value iohyve:name
+ grep '^web-bhyve$'
+ zfs get -H -t volume -o name,value iohyve:name
+ printf 'disk0\t'
+ grep 'disk0	web-bhyve$'
+ cut -f1
+ local disk=maindata/iohyve/web-bhyve/disk0
+ zfs get -H -o name,value -s local,received -t filesystem iohyve:name
+ printf '\t'
+ grep '	web-bhyve$'
+ cut -f1
+ local dataset=maindata/iohyve/web-bhyve
+ zfs get -H -o value iohyve:ram maindata/iohyve/web-bhyve
+ local ram=4096M
+ zfs get -H -o value iohyve:con maindata/iohyve/web-bhyve
+ local con=nmdm0
+ zfs get -H -o value iohyve:loader maindata/iohyve/web-bhyve
+ local loader=grub-bhyve
+ zfs get -H -o value iohyve:install maindata/iohyve/web-bhyve
+ local install=no
+ zfs get -H -o value iohyve:os maindata/iohyve/web-bhyve
+ local os=debian8
+ zfs get -H -o value iohyve:autogrub maindata/iohyve/web-bhyve
+ local autogrub=-
+ zfs get -H -o value iohyve:bargs maindata/iohyve/web-bhyve
+ local bargexist=-A_-H_-P
+ echo -A_-H_-P
+ sed -e 's/_/ /g'
+ local 'bargs=-A -H -P'
+ local test_for_wire_memory=-S
+ local wire_memory=
+ [ grub-bhyve = grub-bhyve ]
+ echo 'GRUB Process does not run in background....'
GRUB Process does not run in background....
+ echo 'If your terminal appears to be hanging, check iohyve console web-bhyve in second terminal to complete GRUB process...'
If your terminal appears to be hanging, check iohyve console web-bhyve in second terminal to complete GRUB process...
+ [ no = yes ]
+ [ no = no ]
+ [ debian8 = openbsd60 ]
+ [ debian8 = openbsd59 ]
+ [ debian8 = openbsd58 ]
+ [ debian8 = openbsd57 ]
+ [ debian8 = netbsd ]
+ [ debian8 = debian ]
+ [ debian8 = d8lvm ]
+ [ debian8 = centos6 ]
+ [ debian8 = centos7 ]
+ [ debian8 = custom ]
+ printf '\(hd0\)\ /dev/zvol/maindata/iohyve/web-bhyve/disk0\n'
+ printf '\(cd0\)\ /dev/zvol/maindata/iohyve/web-bhyve/disk0\n'
+ grub-bhyve -m /iohyve/web-bhyve/device.map -r hd0,msdos1 -c /dev/nmdm0A -M 4096M ioh-web-bhyve
Could not reinit VM ioh-web-bhyve
Error in initializing VM
+ local vmpid=69658
+ wait 69658
+ bhyve -c 2 -A -H -P -m 4096M -s 0,hostbridge -s 1,lpc -s 2,ahci-hd,/dev/zvol/maindata/iohyve/web-bhyve/disk0 -s 3,virtio-net,tap0 -lcom1,/dev/nmdm0A ioh-web-bhyve
bhyve: could not activate CPU 0: Device busy
+ vmrc=71
+ sleep 5

@pr1ntf
Owner

pr1ntf commented Jan 7, 2017

@internationils It appears you set the os property to debian8 which is confusing iohyve.

If you take a look at the readme and scroll down to where it says "Try out Debian or Ubuntu", we detail how to properly set the os property for Debian guests. If you chose standard partitioning, os=debian will work, and if you chose to install on LVM, os=d8lvm should work.
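Concretely, that would be something like the following (guest name taken from this thread; pick the line matching how the guest was partitioned):

```shell
# Standard partitioning:
iohyve set web-bhyve os=debian
# Debian 8 installed on LVM:
iohyve set web-bhyve os=d8lvm
```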

@internationils

internationils commented Jan 8, 2017

  • the console newlines start working after a few commands (5-10)... just the first few lines are spit out without newlines for some reason
  • Tried with os=d8lvm, and I don't get a grub prompt. The host isn't LVM; I just wanted to see what happens.
  • Trying os=debian puts me in the same spot as before, see below
  • I've tried converting the .vdi with VBoxManage and with qemu-img... the file sizes are identical, diff says the files differ, and the result with both files is the same
  • I tried dd'ing it and cp -a'ing it... both give the same result for 'file' (see post above) and both behave the same way.
  • Since it's only seeing hd0 and no partitions, I'm wondering if something is wrong with the disk0 image. The filesystem should be ext3 or ext4 on the image
grub> ls
(hd0) (cd0) (host)
grub> ls (hd0)
Device hd0: No known filesystem detected - Total size 16777216 sectors
[root@freenas /mnt/iohyve]# ll web-bhyve/ 
total 17889741
drwxr-xr-x  2 root  wheel  uarch          5 Jan  5 19:41 ./
drwxr-xr-x  9 root  wheel  uarch          9 Jan  7 16:37 ../
-rw-r--r--  1 root  wheel  uarch         96 Jan  8 10:53 device.map
-rw-------  1 root  wheel  uarch 8589934592 Jan  5 19:11 disk0
[root@freenas /mnt/iohyve]# iohyve getall web-bhyve 
Getting web-bhyve iohyve properties...
bargs          -A_-H_-P
boot           0
con            nmdm0
cpu            2
description    Sat Dec 31 11:20:39 CET 2016
install        no
loader         grub-bhyve
name           web-bhyve
net            igb0
os             debian
persist        1
ram            4096M
size           8589934592
tap            tap0
template       NO
vnc            NO
vnc_h          600
vnc_ip         127.0.0.1
vnc_tablet     NO
vnc_w          800
vnc_wait       NO

@internationils

internationils commented Jan 8, 2017

Here's the problem:

  • here's the file in the vbox jail, with the file command being run from inside the jail:
[nils@bsdpim /usr/home/vbox/VM/nas-dpim64]$ file web-bhyve.raw 
web-bhyve.raw: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200; partition 1: ID=0x83, active, starthead 1, startsector 63, 15952482 sectors; partition 2: ID=0x5, starthead 0, startsector 15952545, 819315 sectors, code offset 0x63
  • here's the file command run from the main host, once in the jail and once in the /iohyve directory, both not showing the partition information:
[root@freenas ~]# file /mnt/maindata/jails/bsdpim/usr/home/vbox/VM/nas-dpim64/web-bhyve-QE.raw
/mnt/maindata/jails/bsdpim/usr/home/vbox/VM/nas-dpim64/web-bhyve-QE.raw: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200
[root@freenas /mnt/iohyve/web-bhyve]# file disk0
disk0: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200

...but the shasums match:

821f5a6d98bc0fe81bdf19c2dbd5d70cb4cf990b  web-bhyve.raw
821f5a6d98bc0fe81bdf19c2dbd5d70cb4cf990b  disk0

Is there some ZFS weirdness going on here? Is 'file' behaving differently? The NAS is at 9.10.2, the jail is still at 9.3 (vbox template)...

Extracting the MBR with dd if=disk0 of=disk0-mbr.bin bs=512 count=1 and looking at it with 'file' gives the same results as for the NAS and the jail above.
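The MBR layout is fixed, which makes this kind of spot check easy: the partition table starts at offset 446 (0x1BE) and the 0x55AA boot signature sits at offset 510. A self-contained sketch against a dummy image (not the thread's disk0):

```shell
#!/bin/sh
# Build a dummy 512-byte MBR, stamp the 0x55AA boot signature at
# offset 510, and read it back with od -- the same bytes that `file`
# and fdisk inspect on a real disk image.
dd if=/dev/zero of=/tmp/mbr.bin bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/mbr.bin bs=1 seek=510 conv=notrunc 2>/dev/null
od -An -tx1 -j 510 -N 2 /tmp/mbr.bin
```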

Looking at the two raw images on Linux with fdisk:

nils@dnet64:/mnt/nas/backup/tmp$ ll
total 8978640
drwxr-xr-x  2 nils nils          0 Jan  8 15:11 .
drwxr-xr-x 10 nils nils          0 Jan  8 14:59 ..
-rw-r--r--  1 nils nils 8589934592 Jan  8 15:12 disk0
-rw-r--r--  1 nils nils 8589934592 Jan  8 11:27 web-bhyve-QE.raw
nils@dnet64:/mnt/nas/backup/tmp$ sudo fdisk -l ./web-bhyve-QE.raw
Disk ./web-bhyve-QE.raw: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000f139a

Device              Boot    Start      End  Sectors   Size Id Type
./web-bhyve-QE.raw1 *          63 15952544 15952482   7,6G 83 Linux
./web-bhyve-QE.raw2      15952545 16771859   819315 400,1M  5 Extended
./web-bhyve-QE.raw5      15952608 16771859   819252   400M 82 Linux swap / Solaris

nils@dnet64:/mnt/nas/backup/tmp$ sudo fdisk -l disk0
Disk disk0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000f139a

Device     Boot    Start      End  Sectors   Size Id Type
disk0p1    *          63 15952544 15952482   7,6G 83 Linux
disk0p2         15952545 16771859   819315 400,1M  5 Extended
disk0p5         15952608 16771859   819252   400M 82 Linux swap / Solaris

@internationils

internationils commented Jan 8, 2017

Questions:

  • is there some way to list all of the parameters that iohyve setup sets? zfs get -H -r all maindata | grep iohyve shows all of the parameters for the VM as it should...
  • is there a way to completely remove all iohyve data to be able to start over / reinitialize?
  • my pool is called maindata, and it is mounted at /mnt/maindata. However, the iohyve directory with the ISOs and disk images is directly under /mnt/iohyve... is that intended? Is that a problem?
  • zfs lists the iohyve as being under maindata where it should be:
[root@freenas ~]# zfs list | grep iohyve
maindata/iohyve                                             12.5G  2.51T   140K  /mnt/iohyve
maindata/iohyve/Firmware                                     372K  2.51T   128K  /mnt/iohyve/Firmware
maindata/iohyve/ISO                                          372K  2.51T   128K  /mnt/iohyve/ISO
maindata/iohyve/wb                                          12.5G  2.51T  4.26G  /mnt/iohyve/wb
maindata/iohyve/wb/disk0                                    8.25G  2.52T  81.4K  -

I've enlisted the bhyve mailing list for help...
https://lists.freebsd.org/pipermail/freebsd-virtualization/2017-January/005089.html

@pr1ntf
Owner

pr1ntf commented Jan 8, 2017

@internationils I apologize, iohyve does not support importing images from other hypervisors. I thought you were trying to install from scratch. This explains the errors a little more.

iohyve setup does a few things depending on what you ask it to do. As for the network setup, rebooting will clear all of that. You could also destroy the interfaces with ifconfig if you didn't want to reboot.

To completely wipe the iohyve datasets from your box, you can simply recursively destroy the iohyve dataset and its children. So something like zfs destroy -rR maindata/iohyve

@pr1ntf
Owner

pr1ntf commented Jan 8, 2017

@arlenarlenarlen

Please double check to make sure the symbolic link is set up for /iohyve -> /mnt/iohyve as mentioned before.

If it's not there, try running ln -s /mnt/iohyve /iohyve and give it a go.

I'm closing this issue for now; if the original problem still persists, we can reopen it.

@pr1ntf pr1ntf closed this as completed Jan 8, 2017
@arlenarlenarlen
Author

arlenarlenarlen commented Jan 8, 2017

@pr1ntf yes, I had the link /iohyve -> /mnt/iohyve but the actual mountpoint is /mnt/bucket/iohyve (bucket being the name of my main pool), so once I fixed the link, things booted 🎉. I don't know how that got bollocksed up, but it could totally be my fault.

@pr1ntf
Owner

pr1ntf commented Jan 8, 2017

Thanks for the info! Glad to hear you got it working. 🎉 🌮 🍻

@internationils

FYI, the issue is likely with grub-bhyve:
https://lists.freebsd.org/pipermail/freebsd-virtualization/2017-January/005094.html
