HOWTO: Restore a dead or deleted vCenter server from an @HPE_Simplivity backup

This morning was vCenter update day for me. I had 15 customer vCenter instances that all needed to be upgraded from 7.0.3.01000 to 7.0.3.01100, so I grabbed a cup of coffee and got started. 14 of the 15 completed without a hitch, but there is always one! This one vCenter server failed to install the patch, leaving me with a dead vCenter. And this particular vCenter resides on an HPE SimpliVity cluster.

In case you didn’t know, SimpliVity has its own built-in backup and restore mechanism, which is generally accessed via the vCenter client. Which is cool, until your vCenter is dead and you need to restore your vCenter from those backups, which is done via vCenter (that same dead vCenter you are attempting to restore). Then what do you do? HPE’s documentation on this isn’t super clear. I’d been down this same road earlier this year, so I had already trudged through the framework of what to do once, but I hadn’t actually written it down. So this time – not only am I documenting it, I’m sharing it with you!

And as always before I begin:

Use any tips, tricks, or scripts I post at your own risk.

Open PuTTY and SSH to one of the OmniStackVC VMs.

Log in as svtcli / yourpassword (this is your emergency password)

Find the available backups:    svt-backup-show --emergency

The first column shows the Datastore name.  The second column is the VM name.  The third column is the backup name and will generally correspond to the backup time. It’s possible to do more granular searches with svt-backup-show. Use --help to get the parameters if you need to narrow down the results.
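For example (a hedged sketch – I’m recalling these filter parameters from memory, so confirm the exact names with --help on your OmniStack version), you can narrow the list down to a single datastore and VM with something like:

svt-backup-show --datastore SVT-DS02 --vm VCENTER01 --emergency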

If the VM has been deleted, its name will show in this list as "VMNAME [Deleted YYYY-MM-DDTHH:MM:SS+OFFSET]" (i.e. "VCENTER01 [DELETED 2022-12-10T13:20:34+0000]" in my example below)

**Note** Your text may be wrapped in PuTTY – I recommend copying and pasting the text out of PuTTY into Notepad++ or some other editor for easier reading.

To restore the VM, you’ll need to know the Datastore, the Object, and the Backup Name (which is the time of the backup) you are restoring.  The syntax for a restore is:

svt-backup-restore --datastore "Datastore" --vm "Object" --backup "Backup Name" --emergency --force

So in my case, it was:  svt-backup-restore --datastore "SVT-DS02" --vm "VCENTER01 [DELETED 2022-12-10T13:20:34+0000]" --backup "2022-12-10T07:00:00-04:00" --emergency --force

If everything worked correctly, you should see a Task Complete. The VM will then be restored into a new folder on the original datastore.

**Note** It may take a minute or two before the restored VM actually appears on the datastore. Be patient! If you simply hit the up arrow and press Enter to run the restore again, you’ll end up with another copy!

If your original VM has been deleted, you can safely rename this folder as required to match the original VM’s name.  I’m taking these screenshots after the fact, so the existing VCENTER01 shown below is the one I restored earlier this morning (and is now back in production), which inspired this writing – the VCENTER01-restored-blahblahblah is the one I just restored in the screenshots above for my documentation.

Now you can log into the WebUI of one of your ESXi hosts as root, register the recovered vCenter, and power it on. To register the VM, right-click Virtual Machines, select “Create/Register VM”, select “Register an existing virtual machine”, then navigate to the datastore and select the restored .vmx file.
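If you would rather stay at the command line, the same registration can also be done from an SSH session to the ESXi host with vim-cmd (the datastore and folder names below are placeholders from my example – substitute your own restored folder and .vmx path):

vim-cmd solo/registervm /vmfs/volumes/SVT-DS02/VCENTER01-restored-blahblahblah/VCENTER01.vmx
vim-cmd vmsvc/power.on VMID

The first command returns the new VM ID; feed that ID to the second command (in place of VMID) to power the VM on.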

**Note** I’m not particularly happy with the editor in WordPress anymore… If anyone knows how I can write these posts in Outlook or Word and then copy and paste them (including the formatting) into WordPress, please let me know.

HOWTO: Mass deleting orphaned @HPE #StoreOnce Catalyst items via cli

Recently, I had a customer go through a merger, and they inherited another StoreOnce located at a remote site.  We made the decision to enable Catalyst Copy from the customer’s existing StoreOnce to the inherited StoreOnce to enhance the customer’s backup and recovery strategy.  The only issue was that the existing StoreOnce Catalyst store was larger than the available capacity on the inherited StoreOnce, which already had the capacity expansion licensed and installed.

Upon further investigation, I discovered that the customer’s Catalyst store had several thousand orphaned Veeam backups from over the years that were no longer present in the VBR database, nor were they picked up by Veeam when rescanning the repository.  Deleting these orphaned Veeam files would easily free up enough space in the source Catalyst store to match what was available on the inherited StoreOnce.  All I needed to do was delete these orphaned files!

This, however, was much easier said than done.  Because Veeam wasn’t detecting them, I couldn’t use the VBR interface to just select them and delete them from disk.  The StoreOnce 4.x WebUI includes the option to list the items in the Catalyst store and delete them.  Unfortunately, it only allows you to select one item at a time, then click delete, and then click through an “are you sure” warning.  All told, it probably takes about 8 to 11 seconds per item to delete it, then you need to navigate through the items list again to find the next aged item and repeat the process.  This is fine if you only have a handful of items to delete.  I had somewhere beyond 5800 items to clean up – at 8 to 11 seconds apiece, that works out to roughly 13 to 18 hours of nothing but clicking!

I recalled that HPE offers a tool called “HPE StoreOnce Catalyst Copy Utility”.  It is specifically designed to copy backup items to alternate StoreOnce appliances for safekeeping, delete backups that are obsolete or orphaned, and synchronize backup copies between a primary backup target and a disaster recovery site.  It can be downloaded from the HPE Software Center (https://myenterpriselicense.hpe.com). What I found, though, is that the documentation with regards to creating the credential file is a bit sparse, so I’m going to take the time to explain how to actually use the tool here.

And as always before I begin:

Use any tips, tricks, or scripts I post at your own risk.

Once you have downloaded the tool from the HPE Software Center, run the installer and accept all the defaults.  If you are on a Windows machine, this means it’s going to install to C:\Program Files\HPE\StoreOnce\isvsupport\HPE-Catalyst-CATTOOLS

The HPE StoreOnce Catalyst Copy Utility is strictly a console based app – there is no GUI at all.  To get started, open an Administrative Command Prompt and navigate to C:\Program Files\HPE\StoreOnce\isvsupport\HPE-Catalyst-CATTOOLS\bin

The first thing you need to do is create an encrypted password file for your Catalyst store.  To do this, you run:

StoreOnceCatalystCredentials.exe --add -u UserName --s StoreOnce_IP --o pass.txt

Note – the UserName is the username with permissions to the Catalyst store, which may or may not be the same as the Admin account on the StoreOnce (in fact, from a security perspective, it should be totally different!). If you copy and paste these command lines, take note that your browser may replace the double dash with a single dash, causing the commands to fail.

(You’ll also note that some of my screenshots are blurred and some are not… I got side tracked in the middle of writing this and became lazy since there really isn’t anything here that is secret anyways).

Now that we have our password file, let’s make sure we can connect to the Catalyst store.  To do this, run:

StoreOnceCatalystCopy.exe --list --origin "StoreOnce_IP" --origin-store "CATALYST_STORE_NAME" --username "USERNAME" --password-file pass.txt

You should get a summary back similar to the one below that shows the current Catalyst Copy job status.

Back in the WebUI, I’ve filtered by “create date” to find those really old orphaned backups.  In my example here, I’m going to remove all the files created prior to May 24 (which is 5 files in this example – and will also break the Veeam backup chain for a couple of them – just something to keep in mind!)

To delete these files with HPE StoreOnce Catalyst Copy Utility, the syntax is:

StoreOnceCatalystCopy.exe --delete-items --filtercreateddaterange [dd/mm/yyyy-hr:mm:ss]:[dd/mm/yyyy-hr:mm:ss] --origin "StoreOnce_IP" --origin-store "CATALYST_STORE_NAME" --username "USERNAME" --password-file pass.txt --force

So in my case I’m going to delete everything created between January 1, 2018 and May 24, 2020, so it would be:

StoreOnceCatalystCopy.exe --delete-items --filtercreateddaterange [01/01/2018-00:00:00]:[24/05/2020-00:00:00] --origin "192.168.99.29" --origin-store "VEEAM01" --username "dcc" --password-file pass.txt --force

As you can see, the HPE StoreOnce Catalyst Copy Utility has removed the 5 files older than May 24, 2020.  It took only a few seconds in total. 

And these deletions are now reflected in the WebUI once I refresh it.

For a full list of the options, advanced filters, and settings related to the HPE StoreOnce Catalyst Copy Utility, be sure to download the user guide from the same page you downloaded the utility from at the HPE Software Center.

And the 5800+ items I had to purge? It was around 294 TiB of capacity and it took a little under 2 hours to complete with this method. The StoreOnce Housekeeping Space Reclamation process is working away at reclaiming all that capacity now.

Workaround: When the #Windows10 Windows Hello setup UI won’t open…

Recently while traveling on the road for 3 weeks, my brand new notebook with Windows 10 x64 Enterprise Edition (Fall Creators Update) started blue screening at boot (safe mode wouldn’t even start).  I really didn’t have the ability to take the time to troubleshoot it too deeply and none of the standard Windows 10 repair functions worked.  In the end I used “Reset my PC” which seems to have solved the blue screen of death, but left me with no installed applications (which really sucked) although my user profile was mostly left intact.  I actually had to use “Reset my PC” 3 days in a row at one point so I could work, and then finally my notebook seemed to return to a stable working condition – until Saturday morning that is.

Well, enough was enough – I can’t trust the Windows installation not to give me grief again in the future, and since I use Veeam Agent for nightly backups, I decided to start over by booting WinPE and running Diskpart then Clean on my two SSDs.  I reinstalled Windows 10 x64 Enterprise Edition (Fall Creators Update), along with all the ZBook 14u G4 drivers, and proceeded to set up my notebook like I always would.  After several hours of installing and configuring software and restoring about 1TB of data from Friday night’s Veeam backup, I had two things left to do: configure my fingerprint reader and encrypt my drives with Symantec PGP Corporate Desktop.

So Windows Hello needs a PIN before you can add a fingerprint – annoying as hell, but necessary… I mean, I have a strong password to protect my account – why the hell should I have to add a relatively weak PIN to enable my fingerprint? (But I will digress on this rant and get to what’s important here.)  So I go to Settings –> Accounts –> Sign-in Options and add my PIN.  Now that my PIN is added, I click on the Add Fingerprint button…


The Windows Hello setup UI opens and then immediately closes – basically just a flash…  I hit it again and still no go.  Uh-oh… Reboot?  Nope.  Drivers?  Nope.  After some Googling and removing all fingerprint data from the BIOS, I’m still no further ahead and out of leads.

Hmm – I used a new tool to customize my profile and made a few changes to it that I normally haven’t made in the past – what if it is my user profile fighting with UAC that is causing this?  So I log in as Administrator and find the “Fingerprint” button is greyed out.  Ok, well maybe I need to do it as a standard user, so I open a command prompt and create a new user with:

net user testuser password /add

I log out as Administrator and log in as testuser.  When I open Settings –> Accounts –> Sign-in Options, the Fingerprint button is active, and when I click it, it opens and allows me to scan my finger.  It appears the issue is my account / profile as opposed to something specific to Windows 10.  This is a good news / bad news scenario.  I don’t need to reinstall Windows yet again, but I don’t want to have to spend hours re-configuring my profile yet again either.  So it’s time to get creative.

Here are the steps I used to get my fingerprint registered in Windows (note – this doesn’t fix the problem long term, it just works around it for now, which is all I need).

  1. Rebooted the machine and logged in as Administrator
  2. Navigated to C:\Users and renamed my user profile folder to C:\Users\JBGEEK.good
  3. Opened Regedit, navigated to HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, and removed the entry that existed for my user profile (if you’re not sure which entry is yours, see the sketch after this list)
  4. Opened Computer Management, navigated to Users and Groups, and removed my user account from the Administrators group
  5. Logged out as Administrator
  6. Logged in with my userid (which creates a new user profile associated with my SID)
  7. Opened Settings –> Accounts –> Sign-in Options and clicked on Fingerprint – the Windows Hello setup UI for fingerprints opened and allowed me to register my fingers
  8. Rebooted Windows and verified I could log in with my fingerprint from CTRL+ALT+DEL
  9. Rebooted Windows again to ensure my profile was unloaded and logged in as Administrator
  10. Navigated to C:\Users, renamed my newly created user profile to C:\Users\JBGEEK.temp, and then renamed my good user profile back to C:\Users\JBGEEK
  11. Opened Computer Management, navigated to Users and Groups, and re-added my user account to the Administrators group
  12. While in Computer Management, I also deleted my testuser account and removed its user profile too
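For step 3, if you’re not sure which ProfileList entry belongs to your account, this little PowerShell sketch (run it from the elevated Administrator session) prints each profile SID alongside the folder it maps to – whichever one points at your profile folder is the key to remove:

Get-ChildItem "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" | ForEach-Object { "{0}  ->  {1}" -f $_.PSChildName, (Get-ItemProperty $_.PSPath).ProfileImagePath }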

I am now able to log into Windows using my fingerprint (without having lost my profile settings and data).  And although the Add Fingerprint UI still doesn’t function for me, I really don’t care, because it’s not like I plan to grow any new fingers that I’ll need to register in Windows before the next reinstall!

Anyways – hopefully this blog will help someone else stuck in the same boat.

HOWTO: Using #PowerShell to ensure a #Veeam USB Repository always has the correct drive letter

Some time ago I had a customer who switched completely to Veeam from Backup Exec (yeah baby!!!).  Offsite replication of any sort was out of the question, as the customer simply couldn’t get the necessary bandwidth from any of the ISPs that serviced the area at anything that even approached a reasonable price.  Instead we opted for an HPE StoreOnce for onsite and some sort of removable drive system for daily offsite.  The customer insisted on using Western Digital My Books (USB3) due to their capacity (6TB) and semi-ruggedness for the offline repo.  The drives are rotated out each day by a staff member for a new drive, and the previous night’s drive is sent offsite.

The My Books were originally set up using F: as the drive letter.  This would work great until someone plugged a USB key into the server and it was auto-assigned F: by Windows 2012R2, or someone logged in who had a drive mapping of F:.  And then other days, Windows would randomly assign a drive letter other than F: even though F: was available.  At this point, Veeam would puke because it couldn’t find the repo anymore.  And occasionally the customer would forget to swap the drive for a fresh one (or even the correct drive some days), which meant there wouldn’t be enough free space on the repo to complete that night’s backup (about 4.9TB is processed daily).
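If you ever want to see what Windows actually did with the drive on a given morning, a quick check like this (just a diagnostic sketch using the same WMI query the script below relies on) shows the label and whatever letter got assigned:

Get-WmiObject -Class win32_volume | Where-Object {$_.Label -match "WD MY BOOK"} | Select-Object Label, DriveLetter, Capacity, FreeSpace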

To work around all this, I ended up writing a PowerShell script to always map the My Book to Z:, clean the previous backups off of it (if any were found, or if the wrong drive with a different backup set was plugged in), and start the actual backup job.  And if the script can’t find the My Book or free disk capacity is less than 95% after attempting cleanup, it will abort the backup and send an email warning to the support team that the backup was aborted.  Rather than have Veeam schedule the job to run, I use Windows Task Scheduler instead to run the script, then use the Veeam PowerShell module to start the backup job from inside the script after I’ve verified Z: is present.

This script assumes that every rotated My Book drive has been formatted and has a volume label containing “WD MY BOOK”.  It also assumes the SMTP server is called mail dot whatever the DNS domain name of the machine running the script is (i.e. my AD is jbgeek.net, so mail.jbgeek.net).  All of my client sites have a cname for their Exchange server called “mail” created in their AD DNS zone – which means I can cut and paste the same script without modification from one client site to another, which cuts down the chance of an editing error, and makes it quicker to deploy new scripts when necessary for my team.

You will need to adjust the “Get-ChildItem -Path” directories (lines 21 to 26) to fit your environment, along with the $SupportEmailAddress variable (line 7) and the “Start-VBRJob -Job” job name (line 41). (Line numbers count from the first line of the script below, blank lines included.)  Line 28 defines the cutoff point for disk capacity – if the available disk space is less than 95% of the drive capacity, the backup aborts.  Line 1 is commented out – I generally always start my scripts with # start notepad++ script.filename so that I can just cut and paste the whole thing into a command prompt to create the script file – that way, as I on-board each new customer, their sites are configured the same as existing sites.  Again, it cuts down the chances of an error and makes it quicker for my guys to deploy new scripts.

**NOTE – the following deletes data from your backup cartridges. Use any tips, tricks, or scripts I post at your own risk.  I accept zero liability and responsibility if you use these scripts!!!**

Here is the command line to create the scheduled task to run as the logged in user at 10pm each weekday (it will prompt you for your password when you run it in a command prompt):

schtasks /create /tn "Weekday Veeam USB Drive Backup" /tr "\"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe\" -ExecutionPolicy RemoteSigned -noprofile -File  C:\Windows\Run_External_Drive_Veeam_Backup.ps1" /sc daily /st 22:00:00  /rp "*" /ru "%userdomain%\%username%"
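Once the task has been created, you can sanity-check it with a quick (entirely optional) query, which dumps the full task definition including the command line and the next run time:

schtasks /query /tn "Weekday Veeam USB Drive Backup" /v /fo LIST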

Here is the PowerShell script (updated 2017.01.17 to fix an error):

# start notepad++ C:\Windows\Run_External_Drive_Veeam_Backup.ps1

add-pssnapin Veeam*

$DnsDomain = Get-WmiObject -Class Win32_NTDomain -Filter "DSDirectoryServiceFlag='True'" | Select -ExpandProperty DnsForestName
$ThisComputerName = Get-WmiObject -Class Win32_ComputerSystem | Select -ExpandProperty Name
$SupportEmailAddress = "support@$DnsDomain"

# Get-WmiObject Win32_Volume | Where-Object {$_.DriveLetter -eq $null}
$drives = Get-WmiObject -Class win32_volume | Where-Object {$_.Label -match "WD MY BOOK"}
$i = 0
Foreach($drive in $drives)
{
#Set letter
$DriveLetter = "Z:"
Set-WmiInstance -input $drive -Arguments @{DriveLetter="$DriveLetter"}
$i++
}

if (Test-Path -path Z:\){
Get-ChildItem -Path "Z:\Veeam\Backups\All VMs - Z Drive" -Include *.vbm -recurse | foreach { $_.Delete()}
Get-ChildItem -Path "Z:\Veeam\Backups\All VMs - Z Drive" -Include *.vib -recurse | foreach { $_.Delete()}
Get-ChildItem -Path "Z:\Veeam\Backups\All VMs - Z Drive" -Include *.vbk -recurse | foreach { $_.Delete()}
Get-ChildItem -Path "Z:\Veeam\Backups\All VMs" -Include *.vbm -recurse | foreach { $_.Delete()}
Get-ChildItem -Path "Z:\Veeam\Backups\All VMs" -Include *.vib -recurse | foreach { $_.Delete()}
Get-ChildItem -Path "Z:\Veeam\Backups\All VMs" -Include *.vbk -recurse | foreach { $_.Delete()}
$externaldrivelabel = Get-WmiObject -Class win32_volume | Where-Object {$_.Label -match "WD MY BOOK"} | Select -ExpandProperty label
Get-WmiObject -Class win32_volume | Where-Object {$_.Label -match "WD MY BOOK"} | % { if (($_.FreeSpace/$_.Capacity) -le '0.95' )
{Send-MailMessage -From "$($ThisComputerName.ToUpper())@$($DnsDomain.ToUpper())" -To "$SupportEmailAddress" `
-Subject "Backup Aborted due to low space on $($externaldrivelabel)." `
-Body  "Aborting nightly Veeam backup to $($externaldrivelabel) (Z:) on $($ThisComputerName.ToUpper()).$($DnsDomain.ToUpper()) due to low disk space." `
-Priority High -DNO onSuccess, onFailure -SmtpServer "mail.$DnsDomain"
break}
}
}
if (Test-Path -path Z:\){
Send-MailMessage -From "$($ThisComputerName.ToUpper())@$($DnsDomain.ToUpper())" -To "$SupportEmailAddress" `
-Subject "Now starting nightly Veeam backup to $($externaldrivelabel)." `
-Body  "Now starting nightly Veeam backup to $($externaldrivelabel) (Z:) on $($ThisComputerName.ToUpper()).$($DnsDomain.ToUpper())." `
-Priority High -DNO onSuccess, onFailure -SmtpServer "mail.$DnsDomain"
Start-VBRJob -Job "All VMs - Z Drive" -FullBackup -RunAsync
break
}else{
Send-MailMessage -From "$($ThisComputerName.ToUpper())@$($DnsDomain.ToUpper())" -To "$SupportEmailAddress" `
-Subject "Error:  No external backup drive (Z:) found on $($ThisComputerName.ToUpper()).$($DnsDomain.ToUpper())!!!" `
-Body  "Error:  Cannot find or mount the external backup drive (Z:) on $($ThisComputerName.ToUpper()).$($DnsDomain.ToUpper())!!!  Now aborting nightly backup to external hard drive!!!" `
-Priority High -DNO onSuccess, onFailure -SmtpServer "mail.$DnsDomain"
break
}

As always – Use any tips, tricks, or scripts I post at your own risk.

HOWTO: Converting from BackupExec to #Veeam when using RDX drives

Ok – I’m completely done with Backup Exec when it comes to VMware.  I’ve been selling, supporting, getting certified on, and even using Backup Exec for our own internal backups since it was Conner Backup Exec for Windows NT 3.1, way back in 1993.  Once upon a time, it was a great product – in fact it was the only product for backups that worked worth a damn.  But its reliability has dropped to nothing over the past 6 or 7 years.  Technical support has been off-shored, and 99.9% of the time, if I am lucky enough to finally reach someone in technical support on the phone, I can’t understand a damn word they say due to their thick accent and shitty VOIP lines crossing the Pacific Ocean.  Today was the last straw with Backup Exec, their crappy bugs, and unreliable VMware backups.  So now it’s time to fully embrace the move to Veeam, which I’ve been considering for some time (note of disclosure – I am also a certified Veeam VMCE – v7, v8, & v9).

Several of my clients have single standalone ESXi hosts, an HPE StoreOnce appliance, a physical Windows Server 2012R2 with an RDX drive or two (for offline backups), and both Backup Exec and Veeam loaded on that Windows server.  Oh – and many, many, many RDX cartridges that have months of rotated backups on them and are all three-quarters full.  I can’t just erase all these cartridges in one swoop and use them for Veeam backups.  And I certainly don’t want to have to log into the clients’ servers every day to manually delete the old Backup Exec folders off the RDX (as they come up in rotation) so that there is enough room for the nightly Veeam backup.  And finally, even though I’m dumping Backup Exec for my VMware backups, I still need to use Backup Exec to back up the 2012R2 physical instance to the same RDX cartridge that Veeam is going to use (at least until Veeam releases their next project).  So what do I do?

A little PowerShell scripting to the rescue – that is what I am going to do!

After going through a sampling of several RDX cartridges at several different client sites, I’ve determined that when Backup Exec runs with GRT enabled, it dumps those backed-up VMs in IMGxxxxxx folders on the root of the RDX drive (including the VMDKs).  I also discovered (at least in the environments that I’ve set up) that GRT-enabled application backups (not VMs, but rather SQL, AD, Exchange) will also be in an IMG folder with either a file called ntds.dit or edb.chk, and sometimes both!  In my case, my 2012R2 server has SQL and AD on it, so I want to be careful not to delete IMG folders that potentially contain my SQL and AD backups (which could screw Backup Exec up even more than normal when it uses that cartridge again for the 2012R2 server).
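Before letting anything delete data, it’s worth previewing which folders would be caught.  Here is a read-only sketch that runs the exact same test as the cleanup script further down, but only prints what it finds (adjust the R: drive letter to match your RDX drive):

foreach ($i in Get-ChildItem R:\IMG*)
{if ((test-path "$i\ntds.dit") -eq $False -and (test-path "$i\edb.chk") -eq $False) {Write-Output "Would delete: $i"}}

Anything it lists is fair game for deletion; any IMG folder containing ntds.dit or edb.chk gets left alone.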

In the end, I set up the RDX drive as a new rotated drive repository in Veeam (prior to this, Veeam only backed up to the HPE StoreOnce).  I then created a new Veeam job that does active fulls to the RDX drive every night (with restore points to keep set to 1).  In the job’s Advanced Settings menu, I added a pre-run script that runs C:\Windows\Remove_BackupExec_IMG_Folders.cmd.  This script in turn launches a PowerShell script that deletes all the IMGxxxxxx folders off the RDX drive except the IMG folders that contain either ntds.dit or edb.chk.

**NOTE – the following deletes data from your backup cartridges. Use any tips, tricks, or scripts I post at your own risk.  I accept zero liability and responsibility if you use these scripts!!!**

Here are the contents of my batch file.

rem start notepad++ "C:\Windows\Remove_BackupExec_IMG_Folders.cmd"
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""C:\Windows\Remove_BackupExec_IMG_Folders.ps1""' -Verb RunAs}"
exit/b

Here are the contents of the PowerShell script to remove the IMGxxxxxx folders (adjust the drive letter accordingly):

# start notepad++ "C:\Windows\Remove_BackupExec_IMG_Folders.ps1"
foreach ($i in Get-ChildItem R:\IMG*)
{if ((test-path "$i\ntds.dit") -eq $False -and (test-path "$i\edb.chk") -eq $False) {Remove-Item $i -force -recurse -confirm:$false}}

But wait! There is more!

Because I am still going to have to suffer with Backup Exec a while longer to back up my 2012R2 server, I need to make sure my nightly Backup Exec job doesn’t eject the RDX cartridge on me before Veeam finishes its RDX job.  To ensure this, I disabled the scheduled RDX jobs on my Backup Exec server.  Fortunately, Backup Exec includes a PowerShell module called BEMCLI, so it was simply a matter of starting PowerShell from a script, importing the module, and starting the job.  This time my scripts run as a post-job script, starting the Backup Exec job only after the Veeam job completes.


Here is the batch file to launch PowerShell.

rem start notepad++ "C:\Windows\Start_BE_UTIL01_RDX_JOB.cmd"
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""C:\Windows\Start_BE_UTIL01_RDX_JOB.ps1""' -Verb RunAs}"
exit/b

And here is the PowerShell script to start the Backup Exec job called “23:10 UTIL01 RDX-Full”.

# start notepad++ "C:\Windows\Start_BE_UTIL01_RDX_JOB.ps1"
Import-Module BEMCLI
Get-BEJob -Name "23:10 UTIL01 RDX-Full" | Start-BEJob -confirm:$false
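
If you’re not sure of the exact job name to feed Get-BEJob, you can list the jobs first (a quick check – I’m assuming here that BEMCLI loads the same way on your Backup Exec version):

Import-Module BEMCLI
Get-BEJob | Select-Object Name, Status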

Now when my Veeam backup job to RDX starts, it deletes all the IMGxxxxxx folders off the RDX drive (unless those folders contain either ntds.dit or edb.chk), and when it completes, it starts the remaining Backup Exec job, which ultimately ejects the RDX cartridge when it completes.

As always – Use any tips, tricks, or scripts I post at your own risk.