HP DL380 Gen9: installing on an SD card using UEFI


Recently we purchased a few HP DL380 Gen9 servers to test our installations of ESXi, Windows and Linux.

Normally we install our ESXi environment on an SD card, which worked fine on the G7 and Gen8 series.

The new Gen9 series boots with UEFI by default. I tried the normal install, but that failed.

Switching to “Legacy Boot” worked fine, but I still wanted to see if we could get it to work with UEFI, so I started browsing the internet and found the following posts that already refer to this issue.

The problem seems to be that the PXE boot environment isn’t UEFI compatible.

Bootloader / PXE

The problem seems to be the bootloader: the legacy bootloader is configured by default on many PXE boot environments, so we need to prepare them for the future and support UEFI as well.

PXE Bootloader


See the article below, which explains how it works and what needs to be done in Windows/Red Hat PXE boot environments.
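As a sketch of what "supporting both" can look like, assuming a dnsmasq-based PXE server (a hypothetical setup; your environment may use ISC dhcpd or Windows Deployment Services instead), DHCP option 93 (the client architecture) can be used to hand legacy and UEFI clients different bootloaders:

```conf
# dnsmasq PXE sketch (hypothetical setup): pick a bootloader per client architecture.
# DHCP option 93 ("client-arch") is 0 for legacy BIOS, and 7 or 9 for x86-64 UEFI.
dhcp-match=set:efi64,option:client-arch,7
dhcp-match=set:efi64,option:client-arch,9

# UEFI clients get an EFI bootloader; everyone else gets the legacy pxelinux one.
dhcp-boot=tag:efi64,bootx64.efi
dhcp-boot=tag:!efi64,pxelinux.0
```

With this in place the same PXE server can serve Gen8 machines booting in legacy mode and Gen9 machines booting in UEFI mode side by side.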


According to the article below, VMware says you cannot use it right now. (It will work when you install on and use local disks; the only thing that isn’t supported is network provisioning / PXE boot.)


You cannot provision EFI hosts with Auto Deploy unless you switch the EFI system to BIOS compatibility mode.

Network boot of VMware ESXi or provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI.

This will probably be fixed in a newer version of the Auto Deploy environment.

SD Card problem?

There can also be problems when using the SD card in an HP DL380 Gen9, for which there are two solutions.

1) iLO bug: according to this post there has been an iLO bug which causes the server not to detect the SD card.


The scope of servers which have this problem could be any HP ProLiant Gen8 or ProLiant Gen9 server with HP Integrated Lights-Out 4 (iLO 4) Firmware Version 1.51 through 2.02 and a Secure Digital (SD) card installed.

This can be solved by an iLO firmware update:


If your firmware is up to date, move on to the next step.

2) According to this post you should disable USB 3.0; after a little searching I found the article below, which describes how to turn it off.

Use this option to set the USB 3.0 Mode.

  1. From the System Utilities screen, select System Configuration → BIOS/Platform Configuration (RBSU) → System Options → USB Options → USB 3.0 Mode and press Enter.

  2. Select a setting and press Enter:

    • Auto (default)—USB 3.0-capable devices operate at USB 2.0 speeds in the pre-boot environment and during boot. When a USB 3.0 capable OS USB driver loads, USB 3.0 devices transition to USB 3.0 speeds. This mode is compatible with operating systems that do not support USB 3.0 while still allowing USB 3.0 devices to operate at USB 3.0 speeds with modern operating systems.

    • Enabled—USB 3.0-capable devices operate at USB 3.0 speeds at all times (including the pre-boot environment) when in UEFI Boot Mode. Do not use this mode with operating systems that do not support USB 3.0. When operating in Legacy BIOS Boot Mode, the USB 3.0 ports do not function in the pre-boot environment and are not bootable.

    • Disabled—USB 3.0-capable devices function at USB 2.0 speeds at all times.

  3. Press F10 to save your selection.

What I found remarkable is that you shouldn’t enable it unless you know for sure the OS supports USB 3.0.

Reference: http://h17007.www1.hp.com/docs/iss/proliant_uefi/s_Accessing_USB_Options_201310290123.html#s_USB_30_mode

I will keep this post up to date as I get more information.


Nothing I have tried so far lets me boot from the network in UEFI mode or makes the SD card visible, even with a manual install.

PowerCLI: start and stop VMs using Get-View


When trying to stop and start a bunch of servers, I noticed that with the normal cmdlets it takes a while to shut down all VMs, mostly because the tasks are started one after another. When you select the same VMs in vCenter, right-click and choose Shutdown, the tasks start almost instantly.

Get-VM | Shutdown-VMGuest -Confirm:$false


I have a test set of five CentOS virtual machines, all with CENTOS in their name.

Using Measure-Command, I measured the time it took to run the commands.

Command (result in TotalSeconds):

Measure-Command {(Get-VM -Name centos*) | Start-VM -Confirm:$false -RunAsync} | select TotalSeconds | ft -a      # 3.32
Measure-Command {(Get-VM -Name centos*) | Shutdown-VMGuest -Confirm:$false} | select TotalSeconds | ft -a        # 6.18

Not too bad, you would say, with five VMs.

But let’s extrapolate this to more machines:

VMs     Start (s)   Stop (s)    Start (min)   Stop (min)
1       0.66        1.24        0.01          0.02
50      33.20       61.80       0.55          1.03
100     66.40       123.60      1.11          2.06
500     332.00      618.00      5.53          10.30
1000    664.00      1236.00     11.07         20.60

As you can see, the extra time adds up quickly.

Next I built two commands that do exactly the same thing, but using Get-View.

Measure-Command {(Get-View -ViewType VirtualMachine -Property Name -Filter @{"Name" = "CentOS*"}).ShutdownGuest()} | select TotalSeconds | ft -a      # 2.33
Measure-Command {(Get-View -ViewType VirtualMachine -Property Name -Filter @{"Name" = "CentOS*"}).PowerOnVM_Task($null)} | select TotalSeconds | ft -a  # 1.64

When we extrapolate this one, we see a big difference:

VMs     Start (s)   Stop (s)    Start (min)   Stop (min)
1       0.20        0.04        0.00          0.00
50      10.00       2.00        0.17          0.03
100     20.00       4.00        0.33          0.07
500     100.00      20.00       1.67          0.33
1000    200.00      40.00       3.33          0.67


For 1000 VMs, starting drops from 11.07 to 3.33 minutes and stopping from 20.60 to 0.67 minutes. That’s an incredible time win when doing this with a lot of servers.

As you can see, if you have a small set of VMs or enough time (batch jobs), it might be easier to use the common cmdlets. Those will do the actions you want, and they are more readable for colleagues who understand PowerShell but for whom the vSphere APIs might be a bridge too far.
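One way to get the best of both worlds is to hide the Get-View call behind a small function, so colleagues can call something that reads like a normal cmdlet. A minimal sketch (the function name and parameter are my own invention, not a PowerCLI cmdlet):

```powershell
# Hypothetical helper: bulk guest shutdown via Get-View, wrapped for readability.
function Stop-VMGuestFast {
    param([string]$NamePattern)

    # One round trip to vCenter, fetching only the Name property of matching VMs.
    $views = Get-View -ViewType VirtualMachine -Property Name `
                      -Filter @{"Name" = $NamePattern}

    foreach ($view in $views) {
        # Fire-and-forget: request a guest OS shutdown and move on immediately.
        $view.ShutdownGuest()
    }
}

# Usage:
# Stop-VMGuestFast -NamePattern "CentOS*"
```

The loop does not wait for each guest to finish shutting down, which is exactly why this approach is so much faster than piping to Shutdown-VMGuest.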

I haven’t been able to test this in practice yet, so the extrapolation is based on the time it took to power on and shut down five test VMs.