Restoring a decommissioned VM

My last post drew some healthy flak for its cheesiness; those readers may skip ahead to the 'Cut the crap' section.
So, it's another not-so-good noon, sun blazing with a vengeance, and I'm understandably uncomfortable on the adventure ride to the workplace in a cramped bus laced with the pungent sweat of my colleagues, mingling with my equally stinky own. The divine proportion has thus been created. My optimism has called on my nasal instincts to filter just the cedar base notes of my cologne out of this ensemble.
Stink and sweat apart, I'm at the workplace for new challenges, dodging the usual checklist-documented routines.

Casually grazing through my call queue, I realize I've been regularly ignoring a restore job that was passed on to me a fortnight ago. But why would a restore job be assigned to me unless it's challenging, like a full VM restore or some such? The engineer who initially handled this has so far succeeded in convincing the customer that a full VM restore is inevitable to restore DB files from the server. Now that's called offloading.

Skimmed Content:

My worst fears take shape as I dig further: not only is this a full VM restore, but the server was also decommissioned months ago.

The backup solution employed is Veeam Backup & Replication v9.0; a sigh of relief, as I'm more confident about Veeam (apparently to be read as 'Veem' rather than 'VM') restoring the files than any other backup solution. The only question here: are those tapes readily available, or must they be recovered from an offsite disaster-recovery vault?
Email exchanges and some network study have brought some clarity into the picture:
  1. The decommissioned server was a Windows server running on a 3-node vSphere 6.0 cluster, with its storage mapped from two Dell PowerVault MD3800i arrays, each with a capacity of 4.86 TB (non-SSD)
  2. They use a BDT FlexStor II tape library and have fed all the tapes (they think are) required into this 1U, 4-slot appliance
  3. The data to be recovered is some DB files on the D: drive of the server (or maybe on C:); the customer isn't very sure about this part

Starting off the day:

Try 1 - Instant VM Restore:
A simple, next-next guided Instant VM Recovery is one of the main benefits of Veeam and other superlative backup suites.

Select a suitable restore point



Once this process completes, the VM will have already spun up on the vSphere cluster.
The VM was successfully created, registered, and powered on, but the console told a different story.
Diving further: the required boot disks, or any disks for that matter, weren't attached.
I checked the datastore to find only files to the tune of ~160 MB.
Trivia: since this is a VM re-created by the Instant VM Recovery method, it still boots from the storage mapped to and used by the Veeam backup server.
"Waiting for user to start migration" drops a super-tiny hint, but people generally tend to click 'Finish' as soon as they see it, or whenever the button isn't greyed out.
Had the disks been mapped and attached successfully, I would have used the 'Migrate to Production' option for the recovered VM under Instant Recovery to move its storage to the appropriate datastore in vSphere.
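The sanity check I ended up doing by eye can be sketched in code: before migrating an instantly recovered VM to production, confirm it actually has virtual disks attached with a plausible total capacity. With pyVmomi, each entry would come from the VM's `config.hardware.device` list (a `vim.vm.device.VirtualDisk` carries its provisioned size in `capacityInKB`); plain tuples stand in here so the sketch is self-contained and runnable without a vCenter connection.

```python
def disk_summary(devices):
    """devices: iterable of (device_type, capacity_kb) tuples, standing in
    for a VM's config.hardware.device list in pyVmomi.
    Returns (disk_count, total_capacity_kb) over virtual disks only."""
    disks = [cap for kind, cap in devices if kind == "VirtualDisk"]
    return len(disks), sum(disks)


if __name__ == "__main__":
    # A healthy server VM: two disks, ~100 GiB provisioned in total.
    healthy = [("VirtualCdrom", 0), ("VirtualDisk", 41943040), ("VirtualDisk", 62914560)]
    # What I actually saw on the recovered VM: no VirtualDisk devices at all.
    recovered = [("VirtualCdrom", 0)]
    print(disk_summary(healthy))    # (2, 104857600)
    print(disk_summary(recovered))  # (0, 0) -> don't migrate to production
```

A count of zero (or a capacity nowhere near the expected ~4.86 TB arrays backing this server) is the cue to stop before clicking 'Finish'.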

Try 2 - Opting for Simplicity:
I start with a simple VM restore from backups, which shows me just 90 MB; so something's not right here either.
The VM to be restored isn't available under the vSphere infrastructure since it has been decommissioned.

Selecting the VM 
Checking for more recovery points
Whichever recovery point I select, with the appropriate tape inserted (and catalogued), the total size remains negligible for a VM.

Try 3 - Some damn way:
Restoring the entire VM from tape gives me a bit more insight:

It now shows 0 KB rather than data in the 30-90 MB range, which is technically more meaningful to me.
Again, whichever restore point I select, it comes up with 0 KB.

Now I've deduced that the disk backups never happened. To confirm, I tried restoring another VM: its disk showed an acceptable size, and the restore successfully booted the VM. I kind of love Veeam for making a VM restore a near-no-brainer activity. (I still read it as VM rather than Veem.)

Investigate the roots:

I pulled a report (another no-brainer) of the backup jobs, and Hugo, we have it right there.

So, as I guessed, the VM was never backed up. That's a root cause to update the customer with, but it doesn't satisfy me yet.

Why was it never backed up, yet no errors were ever thrown? The 'Success' in the status tab against this VM looks mean to me now.

Hey Google:
A simple Google search brings me the root cause. Smart work at the end of the day.
The first result points me to the culprit.
Apparently, this is by design: the VMware vStorage APIs do not allow Veeam, Veritas, or any other backup application to back up disks marked as independent.

Takeaways:

  1. Check for the existence of independent disks; don't just depend on the misleading 'Success' in backup reports. I came across a simple script by Andreas Lesslhumer; I've tested it on a vCenter 6 server, and it works like a charm.
  2. Test-restore your backups so that you identify any possible recovery issues or product limitations before it's too late.
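The script mentioned above is PowerCLI; as a rough Python equivalent of the same idea, the detection boils down to one check. With pyVmomi, `mode` would be `vim.vm.device.VirtualDisk.backing.diskMode`, whose values are `persistent`, `independent_persistent`, or `independent_nonpersistent`; the last two are exactly the disks a vStorage-API-based backup skips. Plain tuples stand in for the live inventory objects so the sketch runs on its own.

```python
def independent_disks(disks):
    """disks: iterable of (vm_name, disk_label, disk_mode) tuples, where
    disk_mode mirrors VirtualDisk.backing.diskMode in pyVmomi.
    Returns the (vm_name, disk_label) pairs a VADP-based backup will skip."""
    return [(vm, label) for vm, label, mode in disks
            if mode.startswith("independent")]


if __name__ == "__main__":
    inventory = [
        ("db-server", "Hard disk 1", "persistent"),
        ("db-server", "Hard disk 2", "independent_persistent"),
        ("file-server", "Hard disk 1", "persistent"),
    ]
    for vm, label in independent_disks(inventory):
        print(f"WARNING: {vm}/{label} is independent and will NOT be backed up")
```

Run something like this against your whole inventory on a schedule, and a job can report 'Success' all it wants; you'll still know which disks it never touched.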

