In 2020, I believe we can safely assume that any developer's machine running VM software also has an SSD.
Alas, modern software development follows standards and best practices from the 1970s. That is, it seems none of Packer, Vagrant, VirtualBox or VMware will configure the VM instance to suit and make use of the host machine running it. Instead, the VM blindly gets old-school legacy controllers, drivers and whatnot assigned. Hey, who cares about performance or the precious time of the poor guy building boxes, right?
Well, I do. So time must be spent researching - to whatever extent the shallow, poorly written documentation allows - what benefit such SSD-tailored configuration provides and how to deploy it.
What has gone into the build already and what has been tried
In the current release of our box (1.0.0), one property has been set for the VMware builder: "disk_adapter_type": "nvme".
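For context, that property sits in the VMware builder block of the Packer template. A minimal excerpt would look roughly like this (the `vmware-iso` builder type and the surrounding structure are assumed; only `disk_adapter_type` is from our actual config):

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "disk_adapter_type": "nvme"
    }
  ]
}
```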
I tried setting a similar property for VirtualBox (the build crashed; more on that in a bit): "hard_drive_interface": "pcie".
But for VirtualBox a whole bunch of other properties must be set as well because, as previously noted, modern software development stipulates an ever-increasing workload on the end user instead of making intelligent choices. Hence, I added this flag to the vboxmanage/modifyvm property: "--firmware", "efi".
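Put together, the VirtualBox builder section with these two settings would look roughly like this (a sketch; the `virtualbox-iso` builder type and surrounding structure are assumed, the two properties are the ones described above):

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "hard_drive_interface": "pcie",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--firmware", "efi"]
      ]
    }
  ]
}
```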
This changes how the ISO file boots, so the boot command must be adjusted accordingly as well.
However, all of this crashes the build in the last build step (exporting and packaging), as noted in this reported issue.
I also tried setting yet another property, "iso_interface": "sata", but this didn't help. Further, I tried setting "hard_drive_nonrotational": true and "hard_drive_discard": true. This did not help either.

Good luck.
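For reference, the additional properties tried above would slot into the VirtualBox builder block like so (a fragment only; the surrounding builder structure is assumed, the three properties are the ones named in this issue):

```json
{
  "iso_interface": "sata",
  "hard_drive_nonrotational": true,
  "hard_drive_discard": true
}
```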