HOWTO: Fix the HPE ILO Amplifier Pack 1.60 Upgrade Failure


Recently (yesterday as I write this), HPE released ILO Amplifier Pack 1.60, which **should** be a seamless automatic upgrade from 1.55, but for me it’s been anything but seamless.  Every single ILO Amplifier Pack 1.55 instance I have across all my clients has failed with “System Update Failed” (and no further details) while installing the update, and each has sent this extra helpful message via email:

[screenshot: the “System Update Failed” notification email]

A call to Proactive Care support and to a couple of my peers confirmed I’m not the only one with this issue.  And unfortunately, HPE has moved all development of the ILO Amplifier Pack out of Houston to India, so my normal contacts in Houston were also out of the loop.

It was at this point I decided I should (heaven forbid!) read the documentation for 1.60, and I found this handy little tip buried at the back of the release notes:

[screenshot: the release notes tip describing the workarounds]

Yeah – thanks ILO Amplifier Pack dev team – great work – that’s some pretty solid code you have there in version 1.5x / 1.60…  Solution 1 wasn’t the answer since the appliances already had https access to the midway services.   On to solution 2…

The first problem with solution 2 is finding the full download for the install.  To do that, you need to go back to the original download page for the ILO Amplifier Pack and re-register.  The link is:

And this is handy because you’ll also shortly need the new registration key they send you, if you didn’t keep your original one!

The next step is backing up the existing installation.  Now I don’t know who thought this was a great idea, but there are only two ways to back up the ILO Amplifier Pack…  Either you plug a FAT32-formatted USB key into the hypervisor and pass it through to the VM, or you back up to an NFS share.  I don’t know about you – but I don’t keep USB keys plugged into my ESXi hosts, and we are a Microsoft / VMware shop, so NFS isn’t readily available.

ILO Amplifier Pack dev team – if you are reading this, it would have been so simple to add a download button here…

[screenshot: the Backup and Restore page, where a download button should be]

Considering almost all my customers’ servers are located in abandoned offices with no staff present thanks to COVID-19, the USB key option was out of the question.  That leaves NFS.

I ended up using haneWIN NFS Server, which I had purchased a license for many years ago.  It’s also available as a 30-day evaluation trial, so that would probably do fine for this task alone, but I’d really encourage you to purchase a full license: it’s only 29.00 EUR, so it won’t break the bank, and you never know when you might need it again (you could even use it to help PXE boot an SPP)!

Once you have haneWIN NFS Server downloaded (I’m using the application version, not the service version), extract it and launch nfssrv-64.exe.  Select Preferences from the Edit drop-down menu.

Go to the Exports tab and enable “Map client root (UID 0) to root for all entries”.  Then click “Edit exports file”.

Delete the 5 example entries at the bottom and then add:

C:\TEMP\nfsd\ILOAMPPACK\ -name:nfs -alldirs

In my case, I’m using C:\TEMP\nfsd\ILOAMPPACK\ as the root of the NFS folder and it’s where I plan to drop the ILO Amplifier Pack (**note** – you need to manually create this path!).  Click Save, then Apply, then Ok.

[screenshot: the haneWIN exports configuration]

Now log into your original ILO Amplifier Pack and at the bottom of the Configuration and Settings menu, you will find Backup and Restore.

It should default to the Backup tab.  From here, select NFS from the dropdown box, enter the NFS server’s IP address and path (/nfs) along with a filename and a password for the backup, and then click Backup Now.

[screenshot: the Backup tab with the NFS settings filled in]

If everything worked correctly you should see Backup successful.

[screenshot: the Backup successful message]

Checking the backup folder, you should find your backup file.

[screenshot: the backup file in the NFS folder]

Now you can go ahead and shut down the old ILO Amplifier Pack virtual machine and deploy a new one based on the 1.60 image.  Ironically – when you boot the new ILO Amplifier Pack VM, you’ll be given an option “to restore settings from a USB” (exact words).  A USB what, I don’t know… 🙂   Where the heck is NFS?  Come on!  Again – ILO Amplifier Pack dev team – if you are reading this, fix this…

[screenshot: the new VM’s console menu]

Select Initial Setup and give the new appliance the same settings as the old one…  Log into the WebUI and activate it.  Now go to the Configuration and Settings menu, and select Backup and Restore.  Click on the restore tab this time and enter the same settings you did when you made the backup, then click the Restore Now button.

[screenshot: the Restore tab]

After a few seconds, your browser should generate an error that it’s unable to connect to the server, and you should see the VM rebooting.

Once it comes back up after a few minutes, your configuration should be restored and you should be good to go!

And as always:

Use any tips, tricks, or scripts I post at your own risk.

Moving a GoDaddy O365 hosted domain to Exchange on-Prem utilizing EOP

Recently, I had a client acquire an organization that used GoDaddy’s O365 offerings.  My client utilizes Exchange on-prem, and is protected with Exchange Online Protection (EOP), which is part of Microsoft’s O365 offerings (and included with Exchange Enterprise Edition User CALs purchased through Open Value Licensing Software Assurance).  My client wanted to add the acquired organization’s domain name to their Exchange server so the new employees would still be sending emails from the old domain name.  Well, that should be no big deal – it was not a huge organization that was acquired (less than a handful of people), and they didn’t have massive amounts of email in their mailboxes (less than 1GB total amongst all of them), so I figured it would be pretty simple: log into each O365 mailbox, export to a PST for backup, remove the domain from the acquired organization’s tenant in O365, add it to my customer’s tenant, add it to Exchange on-prem, and set the address policy for these users.

And these steps worked, but not as I had originally planned.  GoDaddy rebrands their O365 offerings in the GoDaddy way of doing things, and completely blocks the users from reaching the real O365 Admin portal, which is where the domain setup for the tenant is.  This means it’s impossible to add or remove domain names from the tenant.  And because my client was using EOP, I had no option but to remove the domain from the old tenant before I could add it to the client’s tenant (it was already bound to the acquired organization’s tenant, and a domain name can only be bound to one tenant at a time in O365).  So off I went to do some research.

The secret to removing the acquired company’s domain name from the original tenant was to use GoDaddy’s rebranded and simplified admin portal to first delete all the mailboxes associated with the acquired company (make sure you export them to PST first!), then once that was done, from the GoDaddy account products portal, select options for O365 and then select “cancel account”.  This only cancels the O365 portion of the account – nothing else.  Once cancelled, go make a quick cup of coffee, and by the time you get back to your desk, the domain name will have been removed from O365, allowing you to then add it to a different tenant through the normal O365 Admin portal setup / domain wizard.

Finally, all that was left to do was add the new domain name to Exchange on-prem, change the default email address of the new employees, test inbound and outbound email as them, and import their PSTs.

And as always:

Use any tips, tricks, or scripts I post at your own risk.

What you need to know about the new HPE Hybrid IT Master ASE Certification exam

As I am sure those of you who are heavily involved in architecting Hewlett Packard Enterprise’s infrastructure solutions consisting of servers, storage, and networking already know, a new HPE Master level certification was announced earlier this year.  This new certification is the HPE Hybrid IT Master ASE, and it is going to be the pinnacle of all HPE certifications.  Many of us that hold Master ASEs in Servers, Storage, and Networking will naturally be looking to obtain this Master ASE certification as well.  In some cases, Partner Ready requirements will drive your need to obtain this certification, but I also know that for many of my peers, it’s a matter of pride and the desire to achieve it.  Whatever the reason that drives you, though, I am writing this article to tell you that achieving this new certification isn’t going to be a walk in the park.  HPE opted to take a different path to certification, and the traditional testing methods we all know, have tested with before, and are comfortable with have been changed up some for this certification.

By now you are asking yourself: how does Dean know about this?  I, along with several of my peers from around the globe (many of whom you would likely know too), was honored to be invited to join the design team for this certification (and some of the related electives for it).  When this certification goes live, it will have been a 15+ month journey for some of us, beginning in August 2018.  That journey took us from the initial blueprint of how we wanted to test, to the content of the beta courseware (which was just finished last month), to the certification launch on November 1, 2019.  There are hundreds and hundreds of hours involved amongst us in the design of this certification, the courseware, and of course creating the certification exam itself.  Along the way, there were many phone calls, Skype meetings, face to face meetings at various HPE facilities, and countless hours of reading (and then revising) the alpha and beta courseware material that makes up both the Hybrid IT ASE and HPE Hybrid IT Master ASE courses and exams.  In mid-July (2019) many of us from around the globe gathered in a meeting room at HPE’s campus in Roseville, California to work on the exam creation.

The first thing you’ll notice different is the exam number.  Today, we normally all take proctored HPE0-### exams for our certifications.  The HPE Hybrid IT Master ASE certification will be an HPE1-### series exam, and will not be delivered by Pearson VUE but rather it will be delivered by PSI.  While PSI does have some testing centers, the HPE Hybrid IT Master ASE exam will be an online proctored exam that you will be expected to take at home or at your office – similar to the online proctored HPE0-### exams that are already offered by Pearson VUE.

The second difference you will notice is the length of the exam – you will be given 4 hours to complete it, not the typical 90 or 120 minutes you are used to with the HPE0-### exams (yes – washroom breaks will be allowed).

The third thing you will notice different is both the exam price and the retake policy.  The price of the exam will be between $695 and $895 USD depending on your country of residence, which is more than double the price of today’s HPE0-### exams.  The retake policy is also different.   With HPE0-### exams, you can immediately retake the exam once if you fail it (as long as you have not failed twice in 14 days).  With the new HPE1 exam, there will be an automatic 14-day waiting period after each failure before you can rebook for another attempt.

The fourth thing you will notice is the composition of the HPE Hybrid IT Master ASE exam – it will be broken into 3 distinct sections: questions and answers (similar to today’s exams), a research portion, and a hands-on portion (more details on all three of these sections are below).  However, for every single item, once you click submit on the answer to the item, there is no going backwards to review or change your answer.

Part one of the exam will consist of a series of Discrete Option Multiple Choice (DOMC) questions.  For those of you that have not seen a DOMC exam before, basically you get asked a question, and are presented with a single answer on the screen at a time – to which you either select YES or NO if the answer is correct for the question.  Each question may have one or more answers that get presented to the test taker (but still only one answer at a time will appear on the screen).  I’ll admit I was very skeptical and concerned when the decision was made to utilize DOMC, but having worked with it for a while now as part of this process, I’m very comfortable with it and I am no longer concerned it will affect your chances of passing or failing.

Part two of the exam will probably start to take some of you out of your comfort zone.  You’ll be given a series of scenarios that you will need to answer questions about.  Some scenarios may build on previous scenarios you were given as well.  You’ll RDP into a remote environment, and be required to observe many items in that environment to answer questions about accurately building a solution that properly integrates with that existing environment.  Nothing is off the table here from Synergy frames to storage systems and network switches.  Almost all the Hybrid IT portfolio and their respective management GUIs or CLIs are present here – you’ll need to know where to look to determine if the answer presented to you (via DOMC) is correct.  This is no different from what you’d need to do if you were designing an upgrade for one of your customers.  A simple example is “Your customer wants to do this with their existing environment, do you need to add this particular item to your solution to accomplish this? YES or NO”.

If part two got you out of your comfort zone, then part three is going to really take you far out of your comfort zone…  In part two, you are simply reviewing the exam’s hardware infrastructure and environment, but in part three, you are actually modifying the environment – with very real hardware that you are connected to.  Think of it as having to perform a demo of a feature or something to one of your customers using their existing equipment.

You know all those hands on labs offered at various HPE conferences that you may have attended in the past, but you’ve skipped to spend extra time at the bar in the evenings?  Well those HOL experiences will be very handy here, as it’s very much hands on with the management tools (both GUI and CLI).  Everything from configuring, upgrading, or fixing connectivity issues with Synergy, 3Par, Windows, vCenter, and switches (of all types) is covered here – and you may need to use multiple tools from across the portfolio to accomplish your tasks.  You may use either the GUI or CLI to accomplish your task (or maybe both), but the task must be 100% correct and completed when you hit the submit button.

You will be provided all the appropriate manuals, CLI guides, and documentation you require to complete the tasks – they will be available on the server you will be RDPing into.  So it’s open book, so to speak – you’ll have these resources, but only these resources (you won’t be able to search the internet for walkthroughs!).  However, if you have to utilize the provided material to look up how to complete every single little step, you’ll quickly run out of time – the documentation is there to provide you a guide, not to teach you how to perform whatever action it is you need to do for the first time in your life.

A word of warning though – as this is real hardware, running in a real datacenter, it is possible for you to completely break the testing environment, which will prevent you from completing your assigned task, possibly resulting in a score of zero for the task.  In the real world, if you mess up and accidentally destroy or delete something in your customer’s running environment, you’ll have failed in the customer’s eyes.  This is no different – if you break the testing environment here (i.e. maybe you accidentally deleted a volume instead of extending it) and are unable to complete the assigned task because of it, then you’ll fail the question.

HPE says this is the first time anyone in the IT certification industry has used real hardware and an automated scoring system in real-time to verify that what you have done is correct.  Spelling counts.  Exactly correct numbers count (i.e. 100MB vs. 1000MB).  If you are asked in a scenario to name something “bigwheel” and you name it “big wheel” with a space (or you typo it as “bigwhel”), then that answer will be marked wrong (although we are told the scoring won’t look at the case sensitivity of the answer, just the spelling, spacing, etc.).  So just like in real life – spelling errors and wrong numbers will result in broken configs, or in this case a wrong answer.  This is completely automated scoring (don’t worry – it’s been fully vetted by your peers already) – so when you hit that final submit button (and I do believe if memory serves me correctly that you’ll be warned that your answer / task is about to be scored if you hit submit), the testing software instantly runs a series of scripts that interrogates everything that makes up the exam’s hardware environment and looks at the relevant output to determine if you’ve correctly accomplished your assigned tasks.  So you’ll know in just a few seconds after hitting that very final submit button if you are the world’s newest HPE Hybrid IT Master ASE or not!
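To make those scoring rules concrete, here is a toy sketch (entirely my own illustration, not HPE’s actual scoring code) of a name check that ignores case but is strict about spelling and spacing, matching the “bigwheel” examples above:

```python
def name_matches(expected: str, actual: str) -> bool:
    """Case-insensitive comparison that is strict about every other
    character - an extra space or a typo both count as a miss."""
    return expected.casefold() == actual.casefold()

# name_matches("bigwheel", "BigWheel")  -> True  (case is ignored)
# name_matches("bigwheel", "big wheel") -> False (extra space)
# name_matches("bigwheel", "bigwhel")   -> False (typo)
```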

The HPE Hybrid IT Master ASE certification exam is not going to be for the faint of heart.  This certification is going to require you to have several years of real world experience and knowledge in HPE compute, storage, and networking.  And if you think you are going to be able to rely on a brain dump to pass, think again – DOMC, the scenarios on real hardware, the exam cost, and the retake policy (along with some other things I can’t discuss) are going to put a serious crimp on both the quality and quantity of brain dumps that will be available.

So what are my tips to you for achieving this certification?

  • Do take the course.  Yes it is expensive and time consuming, but it will cover (including hands on labs) the concepts and knowledge you must have (aside from the real world experience you should already have) to pass the certification exam.
  • Do not wait to take the exam once you have taken the course – take the exam while the course and hands on labs are fresh in your mind.
  • Be prepared to wait for an exam slot. I think initially it will be hard to schedule an exam due to demand and the limited number of testing slots available per day (given that the exam requires a complete set of real hardware that must be flattened and reset after each exam).
  • Do not wake up one morning and decide to take this exam in the afternoon “cold” without properly preparing.  Many of us do this today at various events we attend (i.e. Aspire, TSS, Discover), and it’s not going to result in an exam pass here.  I know of maybe a handful of my peers in the world that maybe could do that without any preparation and have a reasonable chance of passing.
  • Do read, re-read, and then re-read every single word of every single question on the exam – some of the questions and scenarios are very long with lots of information, and it’s easy to skip over key details, words, or numbers that you will need to accurately answer the question or complete the scenario assignments.
  • Do not be intimidated by the DOMC format – it’s really not as bad as you may initially fear.
  • Do take the practice DOMC exam so you have an idea of what to expect on the real exam. You can find a HPE DOMC practice exam (with examples of ASE level server/storage/networking items) at the following link:

For those of you planning to try to obtain this certification, before you register for the course, I’d suggest you chat with your regional Partner Enablement Manager to see if there are any promotions running for the course and exam (wink, wink, you may find a pleasant surprise).

I would like to wrap up by offering you the best of luck in obtaining the HPE Hybrid IT Master ASE certification and to remind you:

You will truly need to be a Master of HPE Hybrid IT to become a HPE Hybrid IT Master ASE!


Windows 10 Updates failing with error code 0x80244018

Recently at a client site, I set up a new Windows 10 1809 VM to use as a Citrix XenDesktop template.  Based on the OU the VM was in, it had a GPO configured to make it use WSUS (Windows Server Update Services) for updates.  And according to the WSUS console, the VM was 100% patched.  But in Windows 10, when I would go to Windows Update, Windows 10 insisted there were still updates to install, and the updates would always fail within a minute or so with error code 0x80244018.  I even manually downloaded the updates from the MS Catalog website and installed them using WUSA, and yet the updates still appeared as required in Windows Update, and it would still fail with error code 0x80244018.  I won’t mention the several swear words that followed each occurrence during my troubleshooting!

As most IT admins know by now, Microsoft got rid of logging Windows Updates to C:\Windows\WindowsUpdate.log and switched to using trace files.  This method of logging is virtually impossible to read in its native format.  However, Microsoft has created a PowerShell command that will export those trace files to a good old-fashioned text file.  You just need to run Get-WindowsUpdateLog and it will spit out all those trace files to a new plain text file on your desktop called WindowsUpdate.log.

So once I had exported the trace files and opened WindowsUpdate.log with Notepad++, I found this (note – I replaced http with hxxp because WordPress loves to auto-convert to URLs without notice):

2019/07/31 08:44:35.0072804 9140  2220  DownloadManager DO job {CD46C84C-73C5-4002-8EC7-B4ACCE3140E2}, updateId = 9B908E38-BA5D-4462-81E6-C52E06A94B1A.1 failed on HTTP 403 error after retries

2019/07/31 08:44:35.0073096 9140  2220  DownloadManager DO job {CD46C84C-73C5-4002-8EC7-B4ACCE3140E2} failed, updateId = 9B908E38-BA5D-4462-81E6-C52E06A94B1A.1, hr = 0x80190193. File URL = hxxp://, local path = C:\Windows\SoftwareDistribution\Download\42dbab2cf910a424dffa836a33879dc0\, The response headers = HTTP/1.0 403 Forbidden: body content-type denied  Content-Type: text/html; charset=iso-8859-1

So Windows Update was ignoring WSUS and connecting directly to MS, even though the GPO was set to use my WSUS server – which is troubling to me, but not the focal point of this writing today.
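Scrolling through a huge exported WindowsUpdate.log gets old fast, so here is a short Python sketch (my own helper, nothing official – the log path is whatever Get-WindowsUpdateLog dropped on your desktop) that pulls out just the DownloadManager failure lines like the two above:

```python
import re
from pathlib import Path

# Matches DownloadManager lines reporting an HTTP failure or an hr= error code
FAILURE = re.compile(r"DownloadManager.*(failed on HTTP \d{3}|hr = 0x[0-9A-Fa-f]{8})")

def find_download_failures(log_path):
    """Return the WindowsUpdate.log lines where the download manager
    reported a failure (e.g. the HTTP 403 / 0x80190193 lines above)."""
    text = Path(log_path).read_text(errors="ignore")
    return [line for line in text.splitlines() if FAILURE.search(line)]
```

Point it at the exported log – e.g. find_download_failures(r"C:\Users\you\Desktop\WindowsUpdate.log") – and you get only the lines worth reading.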

I took the File URL and pasted it into IE on the Windows 10 VM and got the following error:

[screenshot: the 403 Forbidden error page]

Ok… Now I’ve got the real reason for the 403 error.  We are using a WatchGuard M370 cluster for this customer, with an HTTP Proxy configured – which, incidentally, blocks Windows CAB files by default.  So it was simply a matter of adding * to the HTTP Proxy Exceptions.

[screenshot: the updated HTTP Proxy Exceptions list]

After saving the updated configuration to the cluster, and rebooting the VM, Windows Update successfully downloaded the two outstanding updates and installed them.

So while this doesn’t address why Windows 10 felt the need to pull these updates down directly from the Microsoft servers instead of WSUS, at least now I’m no longer getting error 0x80244018, and I can roll it out to the XenDesktop users.

HOWTO: HTTP boot the HPE Proliant Service Pack ISO DVD using RESTfulAPI to update firmware without messing with WDS or PXE

Most of my customer sites consist of one to four HPE Proliant DL3xx servers running VMware ESXi and an additional HPE Proliant DL3xx running Windows 2012 R2 / 2016. HPE offers some great tools for managing their servers, but unfortunately for smaller organizations, most of HPE’s management tools (and I’m looking squarely at you, Insight Control and OneView) take more time to set up and get running correctly than the time you’ll save by installing / updating a small handful of servers manually.  Therefore, I usually don’t deploy these tools to help install OSes or update firmware at my smaller client sites.  I generally just rely on booting the HPE Support Pack for Proliant (SPP) to update firmware, use a USB key with a scripted ESXi install on it for installing ESXi, and utilize WDS to install Windows directly on my Proliants when required.

Prior to HPE Proliant Gen 9 servers, I would PXE boot the Proliant Service Pack using PXELINUX and mount the ISO via NFS.  Then along came Gen 9 with UEFI.  Unfortunately, PXELINUX suffers from a complete lack of support for UEFI.  A couple of times I pestered some of the HPE SPP developers and managers in person while at HPE’s campus in Houston, but they never really showed much interest in explaining or documenting how to get network booting working with the SPP when the server utilized UEFI, so I had pretty much given up on ever getting it to work.

The other day I was playing with the HPE RESTful Interface Tool and decided to try configuring HTTP boot on DL380 Gen10 with the current SPP ISO image (P11740_001_spp-2018.11.0-SPP2018110.2018_1114.38.iso).  Much to my surprise, after modifying only a single configuration file on the ISO image, I was able to successfully boot the current SPP ISO image via HTTP and run a full firmware update on the Gen10 I was playing with.

The nice thing about this method is that because it is all done via HTTP, you don’t have to mess with or disable your WDS (Windows Deployment Services) server to add Linux support (which is what the SPP ISO is based on).  So this is great news for pure Windows shops!  And as a bonus, these steps work with Gen 9 servers too.

So how did I do it?  Before I share that, as always:

Use any tips, tricks, or scripts I post at your own risk.

First, you need to slightly modify the SPP ISO image.  Copy the original SPP ISO image to your web server (i.e. c:\inetpub\wwwroot).

Open the ISO image with your favorite ISO editor and extract \efi\boot\grub.cfg, then open the grub.cfg with a decent text editor (i.e. Notepad++, but definitely not the built-in Windows Notepad).  Scroll down to the first menuentry, which will be “Automatic Firmware Update”.  Then copy and paste the following just above that menuentry:

menuentry "HTTP Firmware Update Version 2018.11.0" {
set gfxpayload=keep
echo "Loading kernel..."
linux /pxe/spp2018110/vmlinuz media=net root=/dev/ram0 ramdisk_size=10485760 init=/bin/init  iso1= iso1mnt=/mnt/bootdevice hp_fibre cdcache TYPE=MANUAL AUTOPOWEROFFONSUCCESS=no modprobe.blacklist=aacraid,mpt3sas  ${linuxconsole}
echo "Loading initial ramdisk..."
initrd /pxe/spp2018110/initrd.img
}

So your grub.cfg will look like this when you are done:

[screenshot: the modified grub.cfg]

Adjust the HTTP address, path, and ISO image name as required for your network, then save the updated grub.cfg, inject it back into the ISO image (overwriting the existing \efi\boot\grub.cfg), and save the updated ISO image.

Be sure to add the .ISO mime type to your web server so that the ISO file type can be handled correctly.  The command below will work with IIS 8.5 and above to add a new mime type to IIS for .ISO.

C:\Windows\System32\inetsrv\appcmd.exe set config -section:system.webServer/staticContent /+"[fileExtension='.iso',mimeType='application/iso']"
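For reference, that appcmd call simply writes a mimeMap entry into the IIS configuration; the equivalent fragment, if you’d rather edit a site’s web.config by hand, would be:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Tell IIS it may serve .iso files as static downloads -->
      <mimeMap fileExtension=".iso" mimeType="application/iso" />
    </staticContent>
  </system.webServer>
</configuration>
```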

Now, you need to install the HPE RESTful Interface Tool on your machine.  Go to the Hewlett Packard Enterprise Support Center and search for “RESTful Interface Tool for Windows”, then download and install the .msi (there is a Linux version available there as well).

Once the HPE RESTful Interface Tool is installed, run it as an Administrator.  Next, you need to connect to your server’s ILO, select the Bios object, set the UrlBootfile Entry and commit the changes.

*** NOTE: Make sure the UrlBootFile entry matches the URL of the ISO image that you put on your web server and specified in the iso1 switch in the grub.cfg entry.

login ilo_ip_address -u admin -p password
select Bios.v1_0.0
set UrlBootFile=

[screenshot: the iLOrest session setting UrlBootFile]

This takes care of the changes you must make to your Proliant server (keep in mind each server that you want to HTTP boot needs to have this done).

The next time your server boots, the UrlBootFile change will be applied at the end of POST, then the server will automatically reboot and start to POST again.

[screenshot: POST applying the UrlBootFile change]

That’s it – your configuration is all done.  Now when you reboot your server, if you hit F11 for the Boot Menu, you’ll have an entry for HTTP there – select it.

[screenshot: the F11 boot menu with the HTTP entry]

After maybe 30 to 45 seconds (depending on your network speed – I’m using 10GbE), you’ll see the familiar SPP boot menu, but with an extra entry which is set as the default entry.

[screenshot: the SPP boot menu with the new HTTP entry]

Select it, and after about a minute (again – I’m using 10GbE) you’ll see the ISO image get mounted.


If the image fails to mount, verify you are able to download the image you specified as the UrlBootFile from your PC.  If that works, then verify that the grub.cfg is correctly updated, with no typos.  Also – verify your server has 16GB+ of RAM in it, as the grub entry creates a 10GB RAM disk.  You may also need to upgrade the ILO firmware and drivers to current builds (such as 2.61 for ILO4 or 1.39 for ILO5) before using the iLOrest tool.

If you so desire, you could also make the new grub entry totally automatic by grabbing the proper switches out of the “Automatic Firmware Update” entry.  I suspect it may also be possible to split the ISO and boot one ISO without the packages folder (so it boots quicker) and mount a second ISO with the packages folder still there to run the upgrades from.  Just to be clear, I haven’t tested that yet – it’s just a theory at this point.

I have tested this by HTTP booting over a branch office VPN tunnel which tops out at 100Mbps – it took a while for the image to load (I didn’t time it as I was working on other things at the time), but it did eventually load and it successfully updated the remote server.

When the next Support Pack for Proliant is released, all you need to do is update the grub.cfg with the correct paths and copy the updated ISO to your webserver with the same file name you used here.  You shouldn’t need to adjust the UrlBootFile on your servers.

Happy updating!



HOWTO: Set the creation and modification timestamp on a file via #PowerShell

Recently, I updated one of our internal tool kits, and then packaged it for distribution.  It was a busy day when I updated it, so I didn’t manage to package it on the same day as I had updated / built / compiled it.  Internally, we use the date as the version number of the tool (occasionally suffixed with a letter which indicates my screw-ups in the build process on that given day).  In this particular case, the version number was 2018-11-24b, indicating I updated it on 2018-11-24, and that this was the third revision (no suffix, a, then b) that I had created on 2018-11-24 (I found bugs in the first two after testing the packaging).

Because I wasn’t packaging on the same day as I updated it, the timestamps on my archives didn’t match the build date, so I needed to change them – all of them!  So I fired up PowerShell to take care of it.  Below are the commands necessary to view and set both the creation and modification timestamps on a file via an elevated PowerShell prompt.

As always – Use any tips, tricks, or scripts I post at your own risk.

To view the file creation timestamp:

(Get-ChildItem "c:\path\file_to_change.wim").CreationTime

To set the file creation timestamp:

(Get-ChildItem "c:\path\file_to_change.wim").CreationTime = '11/24/2018 11:24AM'

To view the file modification timestamp:

(Get-ChildItem "c:\path\file_to_change.wim").LastWriteTime

To set the file modification timestamp:

(Get-ChildItem "c:\path\file_to_change.wim").LastWriteTime = '11/24/2018 11:24AM'

To set the creation and modification timestamp on every single file in a folder:

foreach ($objFile in Get-ChildItem "c:\path\*") {$objFile.CreationTime = '11/24/2018 11:24AM'}

foreach ($objFile in Get-ChildItem "c:\path\*") {$objFile.LastWriteTime = '11/24/2018 11:24AM'}
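And if you want to set both timestamps on an entire folder tree in one pass, a loop like this should do it (the path and date here are placeholders – substitute your own):

```powershell
# Placeholder path and date - change these to suit.
$stamp = Get-Date '11/24/2018 11:24AM'

# -Recurse walks subfolders; -File (PowerShell 3.0+) skips directories.
Get-ChildItem "c:\path" -Recurse -File | ForEach-Object {
    $_.CreationTime  = $stamp
    $_.LastWriteTime = $stamp
}
```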



HOWTO: Permanently replace the ugly Windows 10/2016 login screen background and colors for all users with #PowerShell

I can’t stand the default Windows 10 and Windows Server 2016 logon background, and one of the first things I do when I build a new Windows template at a customer site is wipe that default background out!  I typically replace it with a single solid color, and I’m kind of fond of the old blue backgrounds that came with Windows XP (or was it Windows NT 4 – or maybe Windows 2000, I don’t remember now) as they are easy on the eyes… Anyways – the background color I like and use has an RGB value of 58 110 165.

I used to have a basic batch file that wiped Microsoft’s stock background out by copying an existing background over from my staging server, but with every iteration of Windows 10 and Windows Server 2016, the path to img100.jpg in C:\Windows\WinSxS changes.  So last night I decided it was time to use some PowerShell to take care of this menace and allow the script to run across multiple platforms and software updates.

I struggled with creating a new solid color background jpg in PowerShell using the RGB value I wanted, but eventually I found some code that someone had posted elsewhere on how to create a gradient jpg, so I snagged it and set the gradient to be the same at the end as at the beginning, which results in a solid color all the way across.  I’m sure someone with better skills than me could clean this up properly – but it suits my purposes, so I stopped searching for a better way.

So basically what this script does is create a new jpg that is 640×480 in C:\Windows\Web\Wallpaper\Staging, adjust the accent colors for the current user and the default user profile, find the path to img100.jpg and replace it after taking ownership and setting appropriate NTFS rights on it, then clear out the lock screen jpgs using RoboCopy.  The lock screen jpgs are owned by the System account, and Robocopy /mir /zb is the simplest way I know of to wipe them out without using Sysinternals Suite psexec to invoke System account privileges and delete the jpgs.

You definitely need to run this in an elevated PowerShell session too!

As always – Use any tips, tricks, or scripts I post at your own risk.

New-Item -Path "C:\Windows\Web\Wallpaper\Staging" -ItemType "Directory" -Force -Confirm:$false | out-null
Add-Type -AssemblyName System.Drawing
$newbackground = New-Object System.Drawing.Bitmap 640, 480
$brush = New-Object System.Drawing.Drawing2D.LinearGradientBrush(
(New-Object System.Drawing.Point(0, 0)),
(New-Object System.Drawing.Point(640, 480)),
[System.Drawing.Color]::FromArgb(58, 110, 165),
[System.Drawing.Color]::FromArgb(58, 110, 165))
$graphics = [System.Drawing.Graphics]::FromImage($newbackground)
$graphics.FillRectangle($brush, 0, 0, $newbackground.Width, $newbackground.Height)
$newbackground.Save("C:\Windows\Web\Wallpaper\Staging\background.jpg", [System.Drawing.Imaging.ImageFormat]::Jpeg)
$graphics.Dispose()
$newbackground.Dispose()
copy-item -path C:\Windows\Web\Wallpaper\Staging\background.jpg -destination c:\windows\web\wallpaper\background.jpg -force -confirm:$false
REG ADD "HKEY_USERS\ZZZ\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "StartColor" /t REG_DWORD /d 0xffa66c39
REG ADD "HKEY_USERS\ZZZ\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "AccentColor" /t REG_DWORD /d 0xffb51746
REG ADD "HKEY_USERS\.DEFAULT\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "StartColor" /t REG_DWORD /d 0xffa66c39
REG ADD "HKEY_USERS\.DEFAULT\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "AccentColor" /t REG_DWORD /d 0xffb51746
REG ADD "HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "StartColor" /t REG_DWORD /d 0xffa66c39
REG ADD "HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "AccentColor" /t REG_DWORD /d 0xffb51746
REG ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Accent" /f /v "DefaultStartColor" /t REG_DWORD /d 0xffa66c39
takeown /f C:\ProgramData\Microsoft\Windows\SystemData /a /r /d y
takeown /f C:\Windows\Web\Screen\img100.jpg /a
icacls C:\Windows\Web\Screen\img100.jpg /grant Administrators:F
$lockscreen = "C:\ProgramData\Microsoft\Windows\SystemData\S-1-5-18\ReadOnly\LockScreen_Z"
$tempfolder = "C:\ProgramData\Microsoft\Windows\SystemData\S-1-5-18\ReadOnly\LockScreen_Temp"
$img100 = Get-ChildItem C:\Windows\WinSxS -Recurse -Include img100.jpg
write-host $img100
takeown /f $img100 /a
icacls $img100 /grant Administrators:F /q
copy-item -path c:\windows\web\wallpaper\background.jpg -destination $img100 -force -confirm:$false | out-null
copy-item -path c:\windows\web\wallpaper\background.jpg -destination C:\Windows\Web\Wallpaper\Windows\BlueBackground.jpg -force -confirm:$false | out-null
copy-item -path c:\windows\web\wallpaper\background.jpg -destination C:\Windows\Web\Screen\img100.jpg -force -confirm:$false | out-null
New-Item -Path $tempfolder -ItemType "Directory" | out-null
Robocopy $tempfolder $lockscreen /zb /mir /njh /njs
Remove-Item -Path $tempfolder -force -confirm:$false | out-null


HOWTO: #PowerShell script to download, extract and add #SysinternalsSuite to the path

I absolutely love Microsoft’s Sysinternals Suite – it’s an amazing set of tools for troubleshooting and tweaking Windows machines.  Heck – there isn’t a day goes by that I don’t use at least one of the tools out of the suite.  I generally try to download, extract and add the suite to the path of any computer I touch.

This morning while building a new 2016 template for a customer, I realized I had missed downloading and adding it to the path, but the VM was in a firewalled VLAN and unable to reach my staging and support server – so I couldn’t just grab the extracted directory from my staging server.  This got me to thinking there must be a simple way to use a cli or script to download, extract, and add the extracted folder to the computer’s path.  So I took 30 minutes and wrote one.

Basically, this script can be cut and pasted into an elevated PowerShell session, and it will grab the most recent version from Microsoft, extract the .zip to C:\Program Files\SysinternalsSuite, and then add C:\Program Files\SysinternalsSuite to the computer’s path if it is not already in the path.

I’ve tested this with Windows 7, Windows 10 (1803), Windows Server 2012 R2 and Windows Server 2016.

As always – Use any tips, tricks, or scripts I post at your own risk.

Import-Module BitsTransfer
$url_zip = ""
$output_path = "C:\Program Files\SysinternalsSuite"
$output_zip = $output_path + '\SysinternalsSuite.zip'
Remove-Item -Path $output_path\*.* -force -confirm:$false -ErrorAction SilentlyContinue
New-Item -Path $output_path -ItemType "Directory" -Force -Confirm:$false | out-null
Start-BitsTransfer -Source $url_zip -Destination $output_zip
Add-Type -AssemblyName System.IO.Compression.FileSystem
function Unzip {
param([string]$zipfile, [string]$outpath)
[System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $outpath)
}
Unzip $output_zip $output_path
Remove-Item -Path $output_zip -force -confirm:$false
$oldpath = (Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment" -Name PATH).path
If ($oldpath -NotLike "*SysinternalsSuite*") {
$newpath = "$oldpath;$output_path"
Set-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment" -Name PATH -Value $newPath
$writtenpath = (Get-ItemProperty -Path "Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment" -Name PATH).path
write-host $writtenpath
}



HOWTO: Manually uninstall Citrix StoreFront after a 1603 MSI Installer error during an upgrade or uninstall (#Citrix #StoreFront #msiexec)

Many of my clients utilize Citrix XenDesktop or XenApp and thus Citrix StoreFront.  Once it is initially configured and running, things are generally pretty smooth going.  But when it comes time to perform in-place upgrades of Citrix StoreFront, sometimes things get a bit hairy and go off track, usually ending up with a dreaded 1603 MSI installer error.  Then you are royally screwed because the StoreFront installation is half installed (or half uninstalled if you are an optimist) and you can’t repair, reinstall, or even uninstall using normal methods.  Below are the notes I’ve developed for myself and my support team to manually uninstall StoreFront should the need arise – which it does, often.


These notes are based on single-server, standalone installs of Citrix StoreFront (as in I’ve used these notes to manually uninstall several versions before).  I have used them on XenApp 6.5 servers and on XenDesktop 7.x controllers without any issues.  Your mileage may vary though.

As always – Use any tips, tricks, or scripts I post at your own risk.

**Warning** Reboot and take a VM snapshot of the StoreFront server before doing anything else.  A reboot is a requirement before doing anything with StoreFront, it doesn’t matter if you are doing an install / upgrade, or are already screwed and need to manually uninstall – reboot before continuing!!!  And if you do not reboot – YOU WILL GET ERRORS that will prevent the instructions below from working.

Immediately after you have rebooted, open an elevated Command Prompt and remove all thumbs.db files on the StoreFront server, which can be locked open by Windows Explorer and cause the uninstaller to fail:

cd \
del /s thumbs.db


Next, verify that the HTML5 Client is actually installed on the machine, otherwise the uninstaller will likely fail later on.

msiexec /i "C:\Program Files\Citrix\Receiver StoreFront\Features\HTML5Client\template\HTML5Installer.msi"

If you get a repair / remove Windows Installer dialog box, then it is installed and you can just exit the installer, otherwise install using the default settings.

Open the StoreFront MMC, and if it allows you (which it likely won’t), delete all your stores.

Open Add/Remove Programs and uninstall the Citrix Receiver if it is installed.

Open an elevated PowerShell console.  Add the Delivery Services Framework snapin, remove all the Feature Instances, then confirm they are all removed.

**Note – only add the single snapin listed below, otherwise you potentially will end up with files locked open during the removal process, which can cause the removal to fail**

### Add the Citrix Delivery Services Framework Powershell Snapin
add-pssnapin Citrix.DeliveryServices.Framework.Commands

### Remove all DS FeatureInstances
Remove-DSFeatureInstance -all -confirm:$false

### Verify all FeatureInstances are deleted - you should see just {} listed
Get-DSFeatureInstance

### If any FeatureInstances are still listed, remove them manually with the next line, otherwise skip to Uninstall-DSFeatureClass
remove-dsfeatureinstance -featureinstanceid feature_name


Stop the StoreFront services if they are still running before continuing.
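If you’d rather stop them from the same elevated console than from services.msc, something like this should work – I’m assuming here that the StoreFront service names all start with “Citrix”, which they do on the installs I’ve seen:

```powershell
# Assumes the StoreFront Windows service names are prefixed "Citrix".
Get-Service Citrix* | Where-Object { $_.Status -eq 'Running' } | Stop-Service -Force
```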

Close PowerShell.

**Note – it is very important you close the PowerShell console at this point and reopen a new one before continuing below and attempting to remove StoreFront’s Feature Classes, otherwise the removal of the Feature Classes will fail**

Open a new elevated PowerShell console (see note above).  Add just the Delivery Services Framework snapin, remove all the Feature Classes, then confirm they are all removed.

###Add the Citrix Delivery Services Framework Powershell Snapin
add-pssnapin Citrix.DeliveryServices.Framework.Commands

###Remove all FeatureClasses
Uninstall-DSFeatureClass -all -confirm:$false

###Verify all Feature Classes have been removed - you should only see {} listed
Get-DSFeatureClass


If no DSFeatureClasses are still listed, skip ahead to running Citrix.DeliveryServices.Install.Uninstall.exe below.  Otherwise, some extra manual cleanup is going to be required.  Using your favorite text editor (Notepad++ in my case), open the Framework.xml file (I usually just run the following from an elevated command prompt):

start notepad++ "C:\Program Files\Citrix\Receiver StoreFront\Framework\FrameworkData\Framework.xml"

Within Notepad++, search for the tag "<Type>" to get the guids of any remaining Feature Classes.
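If you’d rather not eyeball the XML, you can pull the guids out with a couple of lines of PowerShell instead – this is just a sketch, and it assumes each <Type> element holds the guid as its text:

```powershell
# Load Framework.xml and list the text of every <Type> element in the file.
[xml]$framework = Get-Content "C:\Program Files\Citrix\Receiver StoreFront\Framework\FrameworkData\Framework.xml"
$framework.SelectNodes("//Type") | ForEach-Object { $_.InnerText }
```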

Back in the elevated PowerShell console, run Uninstall-DSFeatureClass for each <Type> guid you found in Framework.xml:

Uninstall-DSFeatureClass -Type {guid}

Run the uninstall for each guid one at a time – if you get an error, don’t worry about it, skip it and continue on with the next one.  Once you have gone through all of the guids, run:

Uninstall-DSFeatureClass -all -confirm:$false

Verify all DSFeatureClasses have now been removed.  You may need to repeat the above three steps a few times to completely remove all the DSFeatureClasses due to dependencies within them.
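If you’re having to make several passes, you can also script the one-at-a-time removals – the guids below are placeholders for the ones you pulled from Framework.xml, and the empty catch block mirrors the “skip any errors” advice above:

```powershell
# Placeholder guids - replace with the <Type> guids from Framework.xml.
$guids = "{11111111-1111-1111-1111-111111111111}", "{22222222-2222-2222-2222-222222222222}"

foreach ($guid in $guids) {
    # Ignore failures and move on to the next guid, as per the advice above.
    try { Uninstall-DSFeatureClass -Type $guid -Confirm:$false } catch { }
}
```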

**Note – don’t forget to reload/refresh Framework.xml in your text editor of choice if you need to go back and get the list of the remaining DS Feature Classes again**

Once all DS Feature Classes have been removed, close PowerShell, open an elevated Command Prompt, and run Citrix.DeliveryServices.Install.Uninstall.exe.



StoreFront should successfully uninstall for you now and disappear from Add/Remove programs.  Reboot the machine from the elevated command prompt:

shutdown /f /r /t 0

After logging back in, open an elevated Command Prompt and cleanup any leftover folders by running:

rd /q /s "C:\ProgramData\Citrix\DeliveryServicesUninstall"
rd /q /s "C:\Program Files\Citrix\Receiver StoreFront"

Don’t worry if you get a “The system cannot find the file specified” error message – that just means the folder has already been removed or doesn’t exist anymore.

Finally, using Windows Explorer navigate to C:\Inetpub\wwwroot and verify the Citrix directory has been removed – if it has not been removed, manually check its contents for anything you need to keep and then delete C:\Inetpub\wwwroot\Citrix.

You should now be ready to install a fresh version of StoreFront.

HOWTO: Send “Shift + F10” to a remote #VMware / #vPro console over #RDP

We typically do a lot of our work at client locations over RDP to a server that resides on-prem in the client’s data center.  We then use that on-prem server as a jump server to manage other systems that reside on-prem, and for the most part it works well except for the occasional keyboard combo press that just won’t go through to the machine we are remoted into via the RDP jump server.  This is particularly a problem when we are deploying Windows images to machines, which we do either via Intel’s vPro KVM controls (if the machine is physical) or via the VMware Console (if it is virtual).  Occasionally the Windows machine will fail to boot during the specialize phase of sysprep, and we need to troubleshoot the issue.

If we were physically sitting in front of the machine, the process would be pretty simple – hit Shift + F10 and a command prompt pops open, and from there you navigate to C:\Windows\Panther and access the sysprep logs using Notepad.  But in our case, because we are utilizing a jump server via RDP to access the console (either vPro or VMware), Shift + F10 is intercepted by the jump server and not passed on to the vPro KVM session or the VMware Console, which means we can’t get to a command prompt to start troubleshooting.  When this happens, we need to disconnect from the RDP session and use either RealVNC Plus (for the vPro console) or the VMware client directly from our local machines over VPN, which in some cases is deathly slow at best.

After getting stuck the other day having to troubleshoot a sysprep error over VPN using vPro instead of RDP using vPro, I decided there had to be a way to script a hotkey to send Shift + F10 to the console via RDP.  Unfortunately, I didn’t find anything readily available, so I scripted my own using AutoIt, which I then compiled into an .exe and digitally signed with my code signing certificate.

Basically the script searches for a window with a title of “Intel(r) AMT KVM – VNC Viewer Plus” or a window whose title contains ” on ” (as in “Windows7 on ESXiHost”), makes the first instance it finds the active window, and then sends a Shift + F10 to that window.

Now when I have to send a Shift + F10 to a remote console during troubleshooting, I simply run the correct executable on our RDP jump servers and up comes the command prompt in the remote console!

Below are the two AutoIt scripts and further down are the two compiled .exe files.

As always – Use any tips, tricks, or scripts I post at your own risk.

Code for Intel(r) AMT KVM – VNC Viewer Plus:

$Title = "Intel(r) AMT KVM - VNC Viewer Plus"
WinWait ($Title)
WinActivate ($Title)
Send ("+{F10}")
Sleep (100)

Code for VMware Console:

Opt("MouseCoordMode", 0)
Opt("WinTitleMatchMode", 2)
$Title = "[REGEXPTITLE:(?i)(.* on .*)]"

WinWait ($Title)
WinActivate ($Title)

MouseClick ( "right",9,88,1 )
Send ("+{F10}")
Sleep (100)

Already compiled and digitally signed AutoIT executables: