Cloud-Buddy - Shrink your datacenter and move it around - Bilal Hashmi

Kemp's free load balancer - Tue, 03 Mar 2015 19:47:36 +0000

A few weeks back I reviewed Kemp's LB for Log Insight Manager. Today they announced the availability of a free version of the load balancer they make, covering a variety of different platforms. Of course, if you are a large shop the free version won't cut it for you, but it's definitely a good way to test the product and get familiar with its capabilities by running it in a lab. The differences between the free and paid versions are called out here. Enjoy!

Kemp LB for Log Insight Mgr - Wed, 04 Feb 2015 19:02:50 +0000

Just recently I was playing around with Log Insight, or as they now like to call it, vRealize Log Insight. One of the new features in 2.5 is an integrated load balancer. In previous versions VMware allowed Log Insight worker nodes to scale out, but this introduced an issue with evenly distributing the load. With 2.5, the claim is that an external load balancer is no longer needed.

KEMP has been in the industry for some time and offers load balancers for all kinds of solutions. In fact, it was one of the first vendors to make a virtual load balancer for VMware ESXi, back when it was just ESX, as well as for other hypervisors. Some of their VMware-specific load balancers can be found here. The one we are interested in is called the LoadMaster for VMware vCenter Log Insight Manager. I know that's a mouthful, but its functionality is pretty straightforward, with simple deployment and maintenance.

I don't want to go into the details of how to deploy KEMP's LB, which we will be referring to as the VLM (Virtual LoadMaster). It is an OVF that can be downloaded from the KEMP website. Once the OVF is deployed, it takes minutes for this bad boy to start working. Their deployment guide, which covers the entire deployment process, can be found here, so there is no point in me repeating the same information. However, I will point out a couple of things that might make your deployment a bit easier, especially if load balancing is not something you work with on a regular basis or if you are new to Log Insight:

  1. You will need at least 2 Log Insight nodes deployed (you can work with 1, but then what's the value of a LB?)
  2. Do not enable the ILB (internal load balancer) if you are using Log Insight 2.5
  3. Do not forget to install the Log Insight Add-On Pack once you have deployed the VLM (section 2)

NOTE: The LoadMaster build that will be posted in early February will include the Add-On Pack by default.

  1. A virtual address is the IP of the service, often referred to as a VIP. Basically, this is the address your clients will connect to.
  2. The real servers in this case are your Log Insight nodes. Once a client connects to the virtual IP, the VLM forwards it to one of the real servers (Log Insight nodes) based on the configured scheduling methods and health checks.
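The VIP-plus-real-servers model above can be sketched in a few lines. This is a toy round-robin scheduler with a health-check filter, purely to illustrate the concept, not KEMP's actual implementation; all names and addresses are made up:

```python
from itertools import cycle

class VirtualService:
    """Toy model of a VIP fronting a pool of real servers (Log Insight nodes)."""

    def __init__(self, vip, real_servers):
        self.vip = vip
        self.healthy = list(real_servers)   # nodes currently passing health checks
        self._rr = cycle(self.healthy)

    def mark_down(self, node):
        # A node failing health checks drops out of the rotation.
        self.healthy = [n for n in self.healthy if n != node]
        self._rr = cycle(self.healthy)

    def pick_real_server(self):
        # Round-robin over healthy nodes only.
        return next(self._rr)

svc = VirtualService("10.0.0.100", ["li-node-1", "li-node-2"])
svc.pick_real_server()  # -> "li-node-1"
```

Clients only ever see the VIP; which node answers is the load balancer's business, which is exactly why a failed node can be pulled without clients noticing.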

The image below, which I borrowed from KEMP, does a pretty good job of giving you an overview of what the VLM does.


I don't generally deal with load balancers, so I felt it was important for me to clarify some of the above information. The good news is that I was able to deploy this and make it work within minutes, and if I can, anyone can. I deployed 2 Log Insight nodes, configured my virtual service in the VLM, and added the two nodes as the real servers. After pointing my ESXi servers to the virtual IP address, the VLM was put to work.


Now here is the question. While it works smoothly, is a solution like KEMP, an external load balancer, even needed for those who have upgraded to Log Insight 2.5? I will go with the most obvious answer: it depends!

The Internal Load Balancer (ILB) in Log Insight 2.5 is a new feature. I am not going to comment on its reliability; however, some customers prefer not to be the ones introducing brand new functionality into their environment. My recommendation is: test it. Also, the following is straight from VMware and should be taken into consideration:

ILB requires that all Log Insight nodes be on the same Layer 2 network, such as behind the same switch or otherwise able to receive ARP requests from and send ARP requests to each other. The ILB IP address should be set up so that any Log Insight node can own it and receive traffic for it. Typically, this means that the ILB IP address will be in the same subnet as the physical address of the Log Insight nodes. After you configure the ILB IP address, try to ping it from a different network to ensure that it is reachable.
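That same-subnet requirement is easy to sanity-check before you commit to an ILB address. A small sketch using Python's `ipaddress` module; the addresses and the /24 prefix are made-up examples, so substitute your own:

```python
import ipaddress

def same_subnet(ilb_ip, node_ips, prefix=24):
    """Return True if the ILB IP and every Log Insight node share one subnet."""
    net = ipaddress.ip_network(f"{ilb_ip}/{prefix}", strict=False)
    return all(ipaddress.ip_address(ip) in net for ip in node_ips)

same_subnet("192.168.10.50", ["192.168.10.11", "192.168.10.12"])  # True
same_subnet("192.168.10.50", ["192.168.10.11", "192.168.20.12"])  # False
```

This only checks addressing, of course; being on the same Layer 2 segment (same switch, mutual ARP reachability) is a physical/network question the script cannot answer for you.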

So who should be looking at KEMP, or any other external load balancer for that matter? In my opinion, external load balancers will work best for those who already have them deployed and can't get similar results without them. Another aspect is division of responsibilities. For example, if you have a network team that deals with load balancing, their preference will likely be an external LB in place of the ILB provided by Log Insight 2.5. This by itself can be a pretty good reason, especially for larger organizations. BTW, while a comparison of KEMP to other solutions is out of scope for this post, KEMP does provide a handy sheet with a comparative matrix.

A couple of things to consider regarding KEMP in particular: they have been doing this for some time and have a very mature product that is absolutely production ready. As your Log Insight nodes become unavailable, KEMP stops directing traffic to them, and adding and removing nodes takes seconds.

“LoadMaster uses its L7 visibility to parse the flows on a per message basis and ensure even distribution across the cluster of available nodes, even when members are removed or added. LoadMaster executes health checks against the nodes to ensure that only healthy nodes are used as targets for messages. If a node becomes unhealthy and starts to fail health checks, it is automatically removed and re-added only when it returns to a healthy state.”

According to KEMP, "LoadMaster is the only ADC available that comprehensively supports highly available traffic distribution for all supported Log Insight message ingestion methods." A unique feature is the ability to handle UDP traffic at L7, allowing per-syslog-message load balancing. I highly recommend you check out the VLM if you employ Log Insight in your environment. Though the latest version of Log Insight comes with the ILB, KEMP can certainly enhance your infrastructure and provide a variety of features that may be of use, including content switching, intrusion prevention, web application firewalling, global site load balancing, etc. You can get a 30-day trial and have it working in under 20 minutes, unless you have terrible download speeds. BTW, since the LoadMaster is supported on vSphere as well as Hyper-V, KVM, Xen, and Oracle VirtualBox, along with a variety of public cloud platforms, this could potentially be your one stop for all LB needs, versus having all kinds of load balancers for every environment and creating support challenges. Take a look.

Boomerang to beam your VMs - Thu, 15 Jan 2015 15:00:02 +0000

Recently I got an opportunity to take Boomerang(TM) for a spin. I wasn't familiar with the product prior to this opportunity. Boomerang allows you to move your VMware vSphere workloads to Amazon AWS. It allows you to do the following:

  1. Move your load to AWS
  2. Use AWS as a DR or backup
  3. Bring your load back from AWS to your vSphere environment

Boomerang has a very simple deployment, which perhaps explains the absence of countless PDFs on their website. The steps to get going are simple. You will need the following:

  1. A vSphere environment
  2. An AWS account
  3. Connectivity between 1 and 2

To get started, you download an appliance which is just under 600MB. Once deployed, the appliance can be powered on. It will acquire an IP address if DHCP is enabled and publish it on the summary page of the appliance in the vSphere client. If DHCP is not enabled, accessing the appliance becomes a little interesting; I found some details here. Luckily for me, I have DHCP available and was simply able to hit the appliance's acquired IP once it was up. The default username and password are both 'admin', and it would obviously make sense for you to change them. When you log in with the default credentials, you will be asked to provide the license key and the email associated with it. This is also a good opportunity to change your default password. With a few other choices to make, you will be ready to rock and roll within seconds.

In the Boomerang world you have the option to create protection groups that serve as containers. These containers are made up of VMs. In order to create a protection group, you will need to provide the following:

  1. vSphere admin credentials
  2. vSphere IP/DNS
  3. AWS access key
  4. AWS secret key
  5. S3 Bucket name
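For the record, the inputs above boil down to something like this. The field names and values are purely illustrative (the product collects these through its UI, not through a config file or API that I know of):

```python
# Hypothetical shape of what a protection group needs; every name and value
# here is a placeholder, not Boomerang's actual schema.
protection_group = {
    "vsphere_host": "vcenter.example.local",
    "vsphere_user": "administrator@vsphere.local",
    "vsphere_password": "********",
    "aws_access_key": "AKIA...",          # elided; AWS access key IDs start with AKIA
    "aws_secret_key": "********",
    "s3_bucket": "boomerang-replicas",
}
```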

Once you provide this information, you will be able to create a protection group. In my case I used a PHDVBA appliance I had lying around. My only reason for picking it over others was its smaller disk footprint; it would be relatively faster to move to and from AWS, considering my Internet connection is not the fastest.

Selecting the VMs you want added to your protection group is a breeze. You are provided with a list of VMs discovered in your environment, with two views to choose from:

Protection Group

Protection Group

All that's required is selecting the checkbox. One thing to note here is that in the folder view, you have the option to select individual disks, along with the option to include any future disks as part of the protection group. This ensures new disks are also replicated to AWS. Once the appropriate selection is made, you are ready to start your replication.

Protection Group

My replication consisted of only a single VM. The initial replication, as expected, will take some time. There are a few settings to explore while your replication is in progress, but you will likely get bored as I did because there aren't very many options. They have kept things simple here, and the tool does exactly what it's advertised to do.


Once my replication ended without any events, the "Deploy Now" option became available. This will basically power on the VM in AWS.



When you try to deploy the load, you will be asked if the stack name needs to be changed and whether an existing or new VPC (AWS Virtual Private Cloud) will be used to run this load.


At this point, hitting the "Deploy Now" button begins the process of taking your uploaded data in your S3 bucket and converting it into a running instance inside AWS. This can take several minutes; mine took a little over an hour. Below are some of the logs from the Deploy job to give you an idea of what happens between Boomerang and AWS.

Deployed logs

Once the VM is deployed, the status is updated in the protection group details and dashboard.


Deployed

At this point you will be able to see the VM as a running instance in your EC2 environment, based on the parameters you supplied during the deploy process.


At this point your VM has now left your existing environment and joined your VMs in the AWS world. You can now delete your original VMs and regain capacity locally. You can also bring the VM back into your datacenter from AWS which is exactly what we will do to simulate migration both ways.

To do this, you simply click the "copyback now" button on the detail screen of the protection group, check the server you are interested in, and move on to the next step.


The next screen will ask you for some basic information like the vCenter datacenter name, host name, datastore, and virtual network information. Upon providing these, you can sit back and grab a cup of coffee as Boomerang brings your load back from AWS. The idea of the cloud being a Hotel California-like destination will no longer make sense to you.


You can watch the progress of the job in the protection group dashboard.


Again, because of what the copy-back does under the hood, this may take some time. Here is a little snapshot of some of the jobs that take place in the background when we request a copy-back.

Copyback logs

At the end of the copy process, your VM is registered in your vSphere environment and can be powered on. At this time you have moved your VM from your DC to AWS and back.

This is a very simple and effective tool if you have a vSphere and an AWS environment, or have vSphere and are thinking about leveraging AWS as well. But don't expect it to make your applications just work. The tool does exactly what it's marketed to do: it moves your load from vSphere to AWS and back. How your applications will perform, their compatibility, network requirements, etc. are totally out of scope, and obviously that is something extremely important to consider before making the move. With that being said, you can copy your load to AWS using Boomerang while letting your production load run in your DC; once you are satisfied with your testing in AWS, perhaps then you can make the switch. If you already trust the power of the cloud, you can leverage this tool for cloud-bursting during your busy seasons. The product is licensed per VM, so you pay for what you use.

Boomerang has a forum that will enable you to post questions if you run into any issues. Like I mentioned earlier, it’s a pretty straightforward deployment.

They have also made a video that does a good job of giving an overview of the capabilities of their product:






Participate in Project VRC "State of the VDI and SBC union 2015" survey - Fri, 09 Jan 2015 14:21:22 +0000

The independent R&D project 'Virtual Reality Check' (VRC) was started in early 2009 by Ruben Spruijt (@rspruijt) and Jeroen van de Kamp (@thejeroen) and focuses on research in the desktop and application virtualization market. Several white papers with Login VSI test results have been published on the performance and best practices of different hypervisors, Microsoft Office versions, application virtualization solutions, and Windows operating systems in server-hosted desktop solutions, as well as the impact of antivirus.

In 2013 and early 2014, Project VRC released the annual ‘State of the VDI and SBC union’ community survey (download for free). Over 1300 people participated. The results of this independent and truly unique survey have provided many new insights into the usage of desktop virtualization around the world.

This year Project VRC would like to repeat the survey to see how our industry has changed and to take a look at the future of Virtual Desktop Infrastructures and Server Based Computing in 2015. To do this, they need your help again. Everyone who is involved in building or maintaining VDI or SBC environments is invited to participate in this survey, even if you participated in the previous two editions.

The questions in this survey are both functional and technical, and range from "What are the most important design goals set for this environment" to "Which storage is used" to "How are the VMs configured". The 2015 VRC survey will only take 10 minutes of your time.

The success of the survey will be determined by the number of responses, but also by their quality. This led Project VRC to the conclusion that they should stay away from giving away iPads or other prize draws for survey participants. Instead, they opted for the following strategy: only survey participants will receive the exclusive overview report with all results, immediately after the survey closes.

The survey closes February 15th this year. I really hope you will participate in and enjoy the official Project VRC "State of the VDI and SBC union 2015" survey!

Visit to fill out the Project Virtual Reality Check "State of the VDI and SBC Union 2015" survey.

Good Ole P2V - Wed, 05 Nov 2014 17:30:03 +0000

This year I haven't had much opportunity to blog due to the type of work I have been involved in. However, as soon as I find an opportunity I grab it, and this one happens to be about P2V. I recently had to do a P2V after a long time, for a POC I can't share details about at this time. Having not done a P2V in ages, it's safe to say my confidence was somewhat crushed by some of my initial tests. Here are some of the things I learned.

If your source machine, your P2V server (where Converter is installed), your vCenter, and your target hosts have any firewalls between them, please go to this link and make sure all the communication channels are open.

In my tests there were firewalls in play, and we opened the ports accordingly. Once the ports were open, we did a P2V and, after some issues with SSL, everything worked; once SSL was taken out of the equation, the run went just fine. The machine was imported to the target vCenter and powered on with no issues. Seemed pretty standard. However, our later tests kept failing, which was quite frustrating. The job would fail a few seconds after being submitted, with the following error:

“An error occured while opening a virtual disk. Verify that the Converter server and the running source machines have a network access to the source and destination ESX/ESXi hosts.”

BTW, the error above is my least favorite, as it could really mean so many things. The key is always in the log files. After looking through the log files of our repeated failed attempts, we noticed something interesting in the worker log file:

2014-10-29T11:18:21.192-04:00 [03652 info 'task-2'] Reusing existing VIM connection to
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: Get virtual disk filebacking [Cluster-1] PhysicalServer/PhysicalServer.vmdk
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: updating nfc port as 902
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: get protocol as vpxa-nfc
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: Get disklib file name as vpxa-nfc://[Cluster-1] PhysicalServer/PhysicalServer.vmdk@ESXi4:902!52 c3 66 97 b0 92 a0 93-38 cd b9 4a 17 f8 e0 00
2014-10-29T11:18:33.048-04:00 [03652 info 'task-2'] Worker CloneTask updates, state: 4, percentage: 0, xfer rate (Bps): <unknown>
2014-10-29T11:18:33.048-04:00 [03652 info 'task-2'] TargetVmManagerImpl::DeleteVM
2014-10-29T11:18:33.048-04:00 [03652 info 'task-2'] Reusing existing VIM connection to
2014-10-29T11:18:33.064-04:00 [03652 info 'task-2'] Destroying vim.VirtualMachine:vm-810174 on
2014-10-29T11:18:34.062-04:00 [03652 info 'task-2'] WorkerConvertTask: Generating Agent Task bundle for task with id="task-1".
2014-10-29T11:18:44.467-04:00 [03652 info 'task-2'] WorkerConvertTask: Retrieving agent task log bundle to "C:\Windows\TEMP\vmware-temp\vmware-SYSTEM\".
2014-10-29T11:18:44.483-04:00 [03652 info 'task-2'] WorkerConvertTask: Bundle successfully retrieved to "C:\Windows\TEMP\vmware-temp\vmware-SYSTEM\".
2014-10-29T11:18:44.483-04:00 [03652 error 'Default'] Task failed:

One thing that stood out was the hostname, which I have marked above (ESXi4, in the GetManagedDiskName line). We were converting the physical machine to a host in Cluster-1. The wizard allows you to select an individual host inside a cluster, but that doesn't really mean much: ours was an 8-node cluster, and we noticed that in every failed attempt the job would pick any of the 8 hosts in the cluster, not the one we selected (ESXi8). Also, right before submitting the job, we noticed that even though a particular host was selected as the target, the summary page showed the cluster, not the individual host, as the target location.

So, remember I mentioned we had firewalls between our hosts, vCenter, source machine, and P2V server? When we opened the ports, we figured we would only need access to a single ESXi host, since the P2V wizard was allowing us to select an individual host in the cluster. As it turns out, because this host was in a cluster, all hosts in that cluster need the communication channel open, as was evident from the worker log file. Once we made that change, the P2V worked like a champ, over and over. Another option would have been to take our target host out of the cluster.
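Had we scripted a quick reachability check against every host in the cluster, not just the one selected in the wizard, we would have caught this much sooner. A rough sketch that probes the NFC port (902) that shows up in the worker log; the hostnames are placeholders for your own cluster:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Attempt a TCP connect; True only if something answers within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe every host in the cluster, because Converter may pick any of them.
# These names are made-up examples.
cluster_hosts = [f"esxi{i}.example.local" for i in range(1, 9)]
# unreachable = [h for h in cluster_hosts if not port_open(h, 902)]
```

Note this only confirms TCP reachability on one port; Converter also needs 443 and other channels open per the port requirements linked earlier.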

So, how did we get it to work the first time? I blame that on luck. We went back to the log files and confirmed that the one time it actually worked, the job happened to have selected the host we opened the ports for. I should have bought a lottery ticket instead :(. Luckily, we tested more than once and were able to confirm everything that needs to be in place for this to work. I may have known this some time ago, but having been away from P2V for a while, it was refreshing, challenging, and rewarding to finally get it working. The Converter version was 5.5.1 build-1682692.

Lesson learned? Do lots of testing.

]]> 0
PHD Virtual Backup (v8) - Mon, 07 Jul 2014 14:54:39 +0000

Recently I got an opportunity to play around with Unitrends' backup solution, which is still in beta. For those of you who are not aware, PHD Virtual is now part of Unitrends. At first I was a bit skeptical, because I have seen products go to waste after an acquisition. But in this case I am quite happy with the progress the product has made under the new name.

My goal is to share my views from my testing of the product and also try to explain the new architecture. Once you get the hang of it, it's really simple and makes a lot of sense. In my initial attempt I sort of screwed things up, but that was mostly because I was being hasty and didn't bother reading up on a few simple concepts.

Like previous versions, version 8, the one we will discuss here, revolves around the good ole VBA. However, the VBA now does a lot more and has delivered on past promises. For example, my previous complaint was the lack of a single pane of glass to manage multiple VBAs; that's no longer a problem. In fact, they went a step further and made it possible to manage VBAs deployed in Citrix or even Hyper-V environments.


In version 7, the concept of appliance roles was introduced and version 8 continues with this model. Each appliance can be dedicated to a single role or a single appliance can have multiple roles configured. So what are those roles?

Presentation (P) – The Presentation appliance runs the web-based interface you use to configure and manage your installation. Only one Presentation appliance is necessary per installation, across all configured environments (including across all hypervisor types). All management and configuration of PHD Virtual Backup occurs through the Presentation appliance's web interface.

Management (M) – Each environment requires one appliance designated as the Management appliance. This appliance performs inventory and other hypervisor-specific tasks and manages the work of the Engine appliances. Each environment you add to your PHDVB deployment requires the IP address of one appliance to act as the Management appliance. The Presentation appliance can also be designated as a Management appliance.

Engine (E) – Engine appliances perform the actual data processing and send data to their configured data stores. Engine is the most common role an appliance will take on in your deployment. Appliances with the Presentation and Management roles can also be configured with the Engine role.


Note: As a general recommendation, you will need at least one Engine appliance for every 10 TB of source data you will protect (or every 1 TB of data if using XenServer, or every 5 TB if using CIFS backup storage).


So for any deployment you will need at least one VBA holding the P, M, and E roles. If the environment is small enough, a single VBA can host all of them; you then add more Engines as the environment grows. You can tell someone was thinking about scaling. The latest beta also incorporates a tutorial video that gives you a high-level view of what the VBA roles may look like when laid out, which will also assist you in determining what will be ideal for your environment. I highly recommend you watch it before moving forward with the full configuration.


I am stealing a diagram from Unitrends documentation to explain how the roles can be laid out.

Deployment single hypervisor

In the example above, as you can tell, three VBAs are deployed. It's a very small two-host environment, but it may be a good idea to isolate the Engine role from the P and M roles if expansion is anticipated down the road. Technically you could get by with a single Engine, unless you needed different kinds of backup datastores to back up the data. Yes, the Engine supports NFS, CIFS, and attached storage; however, you can only have one type of storage per Engine. So in the example above, you could assume that maybe VBA2 uses NFS and VBA3 uses attached storage. It's also possible that the hosts are running really large VMs that exceed the 10 TB limit per Engine, and that's why two VBAs are deployed.

What could be another reason for two VBAs in the example above? Is it possible that host 1 is running ESXi and host 2 is running Hyper-V? I would say YES, it's possible to manage two different environments using the same portal, but there is one important piece missing: each environment needs its own Manager. Take a look at the diagram below:

Deployment multi hypervisor

The diagram above is identical to the one before it, with one key difference: host 2 also has a Manager VBA, which means this example could very well work with environment 1 being vSphere based and environment 2 being Hyper-V or even Citrix based. Isn't that awesome!?

Your Engine is attached to a disk where the backups are stored. When attaching the storage, you can turn encryption on if it helps you sleep better or if some compliance requirement demands it. You can also set the compression rate of the backup and the block size. Below is some useful information for making those decisions.

Backup Compression:
• Use BDS Setting – The default setting; this option will use the compression setting applied to the backup data store. The default setting at the backup data store is GZIP.
• Uncompressed – No compression is applied to the backup data. This can be useful (or required) when using storage devices that perform additional compression.
• GZIP – This compression method performs the highest compression level, resulting in a reduced amount of storage space used. The additional compression used by GZIP will impact performance – backup speed and overall backup time.
• LZOP – This method results in less compression than GZIP, but in most cases will impact performance much less, producing compressed backups faster than the GZIP alternative.

Block Size:
• Use BDS Setting – The default setting; this option will use the block size setting applied to the backup data store. The default setting at the backup data store is 1 MB.
• 512 KB, 1 MB, 2 MB, 4 MB – The size of the backup block used to store backup data. This should not be adjusted in most situations unless instructed to by support; adjusting the block size can impact backup and restore performance.

My wow moment:

The deployment is really simple: if you can deploy an OVF, you are golden, as long as you understand the architecture. Because I didn't at the beginning, I managed to screw up my initial deployment. The folks over at Unitrends were kind enough to get on a call with me to take a look at my issues. At first glance I was informed that I needed to update my appliances, just like any good ole vendor would suggest. My eyes started to roll, until I found out how simple that really is.




The folks over at Unitrends sent me a link to a file that took a few seconds to download. I unzipped it, went to the config tab of the portal on my VBA, hit upload, selected the file with the .phd extension, and hit save. And that was it. I figured if I only had to do that for each VBA, it wouldn't be too bad. But I was informed that, because I did it from the Presentation VBA, which knows the Managers and the Engines in my environment, all the appliances would be updated and I would get a message once that completed. Sure enough, 10 minutes later I got the message, and as I hit OK, all the branding had changed from PHD to Unitrends and all my appliances were updated. Way better than my original plan of redeploying with the latest appliance. I know this doesn't highlight the tool's backup abilities, but I think the tool has proven over the years that it can do backups and more. To me, the manageability of your backup solution is key.

One key thing to note: there is also an auto-update feature that pretty much negates the need for you to update the appliances at all, as it will happen on its own.

The portal and UI:

When you log in to the portal you get a holistic view of your entire environment. This includes the number of VMs, the number of VMs protected with backups, recent restores, jobs, alerts, available storage, etc., all in one place. Don't forget you could be pulling this information from your entire environment, hypervisor agnostic. I think this dashboard brings good value to the table and gives a good high-level view of things. You can hit the cog on the right to change how often this data is refreshed; the default is 5 seconds.

Portal UI


The errors and some of the alerts are really my doing. I didn't realize it was a bad idea to try and back up 400 GB of data to a 40 GB backup volume. But I was quickly able to add more storage and get past that.

The rest of the tabs are pretty self-explanatory. The Protect tab allows you to back up and replicate VMs; the Recover tab does exactly what it says – you can do full restores or even file-level restores. One of the things I like is the ability to change the MAC address on the VM when doing a full restore; it's good to see features from the past are still baked in. The Jobs tab lets you create new jobs, monitor them, and view recent jobs.




I really like the Reports tab because it presents all the useful information you can think of, right out of the box, in a very pretty way. Way prettier than I could ever make it, because I just don't have that kind of patience or talent. There are reports around VMs, virtual appliances (this tells you the version your VBAs are running), replication, archives, and storage. So there should be all kinds of reports for you to pass around in meetings, with not just pretty colors but also very useful information.

The Configure tab is where all the magic happens. This is where you add the multiple VBAs and assign them the appropriate roles. This is also where you can do the update-all-my-VBAs magic with a single click. Your SMTP info for email alerting and the credentials for FLR are stored here as well.

Lastly, another thing covered in this tab brings me to my next topic: licensing.


The solution is installed with a trial license at first, and once you realize you kinda like it, they want money. Urgghh, those evil people. Jokes aside, cost is also a key factor for a lot of organizations. I don't have pricing information at this time, but I do know the product is licensed on a per-socket basis, meaning the sockets of the hosts. So if you have 10 hosts with 4 sockets each whose VMs you need to protect, you will need 10 * 4 = 40 licenses. Hopefully the cost will be acceptable, like in the past.


Overall, I really like the solution. I didn't cover a lot of the actual backup technology used under the hood, because most of that has been covered by others and by myself in the past; I will post a few links to my previous reviews. I wanted to point out all the new things that I saw and go over the architecture so that you understand what it will take to deploy this product. If you have half a working brain like myself, all you will need is this post and access to a vCenter where you can deploy the OVF. And you will feel like an all-important backup guru like myself (that's a joke, I am anything but). Just make sure you watch the tutorial video once you deploy the first VBA; that will really make things extremely simple for you.

The one thing I really liked was the way appliances are updated, and I think I made that quite clear when I was on the phone with Unitrends. Like everything else, there are things that one can desire. This is kinda hard to come up with at this point, but I think instead of deploying the VBAs from vCenter, it would be really nice if only the first deployment were needed from vCenter and the rest could be handled from within the solution’s portal. Perhaps there could be a report that suggests adding more engines to improve performance. The data is already there, it seems; it just needs to be tied together. In the end, good job PHD Virtual, you have done great. And Unitrends, don’t let us down, because we are spoiled.

Previous reviews:
CloudHook will hook you up – PHD Virtual 6.2

How reliable is your DR plan?

New Toy from PHD – RTA Calculator Sat, 30 Nov 2013 10:13:57 +0000 PHD Virtual recently released a free tool called the Recovery Time Calculator that calculates the time it will take to recover virtual machines and critical applications in the event of an outage or disaster. Did I mention it’s FREE? Below is the press release.

Dubbed the RTA Calculator, for the ‘Recovery Time Actual’ estimate it provides, PHD’s free tool can be easily downloaded and then immediately provides visibility into what your organization’s actual VM recovery time would be in the event of an outage.

The RTA Calculator has a built-in wizard to connect to VMware. Once installed you are prompted to select the VMs you wish to time for an RTA estimate, and set the appropriate boot order. The RTA Calculator will then take a snapshot and create linked clones for each VM. Due to the use of snapshotting and linked clones, the VM creation process is very quick. The tool then simply powers up the VMs and times the process, calculating the total time it will take to recover that grouping of VMs – it’s that simple! This gives you an accurate Recovery Time Actual you can use to compare to your Recovery Time Objective and determine if you’ll be able to adhere to your SLAs.
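The RTA-versus-RTO comparison the tool enables boils down to timing the boot sequence and checking the result against your objective. Here is a minimal sketch of that idea, with hypothetical boot actions (this is not the RTA Calculator’s actual API):

```python
import time

def measure_rta(boot_actions):
    """Time the sequential power-on of a group of VMs in boot order."""
    start = time.monotonic()
    for power_on in boot_actions:
        power_on()  # e.g. boot the linked clone and wait for the guest OS
    return time.monotonic() - start

def meets_sla(rta_seconds, rto_seconds):
    """A recovery plan meets its SLA when the actual time is within the objective."""
    return rta_seconds <= rto_seconds

# Hypothetical example: three VMs whose "boot" is simulated with a short sleep.
rta = measure_rta([lambda: time.sleep(0.01)] * 3)
print(meets_sla(rta, rto_seconds=600))  # True for a 10-minute RTO
```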

Run the RTA tool as often as needed to produce an estimate with different production loads.

What You’ll Need

System Requirements

Like all PHD Virtual products, the RTA Calculator is highly effective while still maintaining ease of use. It requires no training or product documentation. All you need to know is contained within this short video demonstration.

Other than that, just ensure you meet the following 3 requirements:

  • The RTA Calculator is a Windows application that requires an Administrator account and .Net 4.0.
  • The RTA Calculator supports VMware ESX or ESXi with vCenter 4.0 (or higher) with default ports.
  • The RTA Calculator will need the VMware guest tools installed.

Download the FREE tool here:

CloudPhysics and Admission Control tuning Fri, 06 Sep 2013 21:46:27 +0000 A while back I made a little demo video that showcased one of CloudPhysics’ cards that is still my personal favorite. I figured it would be a good idea to share it in case there is anyone out there who hasn’t taken CloudPhysics for a spin yet.


Recovery Management Suite Tue, 13 Aug 2013 14:44:14 +0000 My last few posts have been about backups and DR, and I have extensively covered PHD Virtual’s products and their capabilities. I have covered PHD Virtual Backup, CloudHook (my review), and just recently ReliableDR (my review). PHD Virtual has now released their Recovery Management Suite (RMS), which ties all these products together and delivers an extremely powerful solution that covers everything from simple backups to a whole-site failure (without making it an extremely complex undertaking).



So why is this such a big deal? There are various products out there with similar capabilities, and that is true. However, I can’t think of a single product that does all that RMS offers and still manages to keep it simple. For example, let’s take a look at SRM (not trying to talk trash, but simply using it for comparison). The business-centric view of ReliableDR, its DR automation, and its ease of use in a multi-tenant environment for cloud providers set this product in a league of its own. Now if we throw the other two products into the mix (Virtual Backup and CloudHook), PHD Virtual starts to sound like a real complete solution. And it truly is. Did I point out that PHD Virtual has its own replication mechanism that can be used to replicate data between sites?

I really like SRM too. However, being a technologist, I like to compare products and see where one serves a better purpose. Let’s put SRM in the back seat for now and call on Veeam for a change to see how some of its features stand against RMS. I am a big fan of Veeam Backup & Replication also. However, there is some real planning that goes into setting up Veeam’s backup manager, proxies, and repositories. In the case of PHDVB, things are pretty simple. And once you tie in cloud storage, things get even better. And just like Veeam, PHDVB also offers support for multiple hypervisors, but the ReliableDR portion is what puts PHD Virtual’s products miles ahead of Veeam in my opinion.

For the first time, companies of all sizes have a unified, affordable solution that automates and assures recovery processes in order to reduce risk, decrease recovery times, and help sustain the business throughout any issues that might occur. PHD Virtual RMS is the only solution to offer comprehensive, integrated recovery that addresses the entire recovery continuum. It provides unified data protection and disaster recovery capabilities delivered through integrated backup, replication, recovery, and DR orchestration that is powerful, scalable, easy to use, and delivers immediate value “out-of-the-box”.


Obviously, nothing in life is perfect, and that is the case with RMS and its components as well. There is room for improvement, and I am more than certain that the improvement will come before you know it. I am saying this from experience. PHD Virtual has made tremendous amounts of changes and enhancements to their products over the last few years that could only compare to a handful of organizations. What may seem like the next logical step actually does happen at PHD Virtual. And I hope they maintain that trend. As a matter of fact, some of what I thought should be included in future releases of the products also made it. For example, I wondered why all the PHD products weren’t being offered as a single solution that worked together like one… tadaaaaa! So there, an organization that can keep a character like myself pretty satisfied has a lot to offer in years to come.

Some of the latest enhancements coming in the upcoming versions of RMS are as follows:

  • Automates the replication of backup data from a source Backup Data Store (BDS) to a remote BDS (including cloud storage).
  • Only changed data is moved from the source BDS to the archive BDS (a bandwidth saver and, more importantly, data headed for the cloud will not move from production; it will likely move from your primary BDS, hence not even touching your production load).
  • The archive BDS can have different retention policies than the source (I complained about this).
  • You can configure individual VMs to be replicated, or synch the entire BDS with the archive location.
  • You can configure multiple source BDS locations to be archived to a single archive BDS location.
  • The archive BDS supports global deduplication across all source BDS data
  • Configure a generic S3 backup target to leverage additional object storage platforms, other than Amazon, Rackspace, and Google (Yoooohooo! more options)
  • Certified Replica – Automating the testing of virtualized applications that have been replicated using PHD Virtual Backup
    • A new CertifiedReplica job exists in ReliableDR for configuration of recovery jobs against PHD VM replicas
    • These jobs can be used to automate complex recovery specifications for failover and testing of PHD VM replicas. What kinds of complex jobs? For example:
      • Boot orders
      • Network mapping for test and failover
      • Re-IP for Windows AND Linux (unlike Veeam, here you can re-IP Linux machines :))
      • Application testing and other recovery tasks
    • Verification can be scheduled at intervals as frequently as every 4 hours
    • Hypervisor snapshots are used for CertifiedReplica recovery points and other testing/failover tasks, making the entire process storage agnostic

And lastly, some rumors. I have heard that ‘Certified Backups’ are also in the works. Yes, machines are taking over! Look at what certified backups have to offer if they make it through.

  • Automating the testing of VMware virtualized applications directly from backups using Instant Recovery from PHD Virtual Backup
    • A new CertifiedBackup job that initiates Instant Recovery jobs within PHD Virtual Backup and automates boot orders, network reconfiguration, and other recovery tasks
    • These jobs can be used to automate VM recovery specifications for testing of PHD backups
    • When testing is complete, Instant Recovery sessions are terminated automatically
    • Verification can be scheduled at intervals as frequently as every 4 hours

When you look at all the capabilities tied into a single product, you have no choice but to give it a go. Like I mentioned earlier, I like other products too, but if you are looking for a single solution that addresses your backup and DR needs, you have to give RMS a go. It would be silly to overlook it. I hope VMware, Veeam, and other vendors try and come up with their versions of RMS as well (though it will be interesting to see VMware come up with a solution that supports other hypervisors also).

I have been asked a couple of times in the past if ReliableDR supports vCloud Director and the answer is YES, it does. Try RMS for free today.

DR Benchmark Tue, 16 Jul 2013 20:15:29 +0000 There is an ever growing need for having some kind of a business continuity / disaster recovery plan that keeps the ball rolling when the unexpected happens. There is an enormous amount of data being collected every day, and businesses can’t afford to lose it. It’s valuable. It’s not only important from a business perspective, as it aids their purpose and mission, but I also believe that the data being captured today marks a new way of recording history like never before. And we are the generation that is making it possible. As it is our biggest contribution in spreading the wealth of knowledge and information, it becomes extremely important to protect it. For those reasons, I believe disaster recovery should be an integral part of any implementation made in today’s day and age.

Now the last few years have been a lot of fun for any techie. So much has changed just in the last 4-5 years. So much is changing today and the near future promises a lot more changes. Of course most will be good and some will be made fun of down the road. Due to all the changes that have come about in the datacenter over the past few years there has been a paradigm shift. For example, shops that required change windows to restart a machine are now moving their machines from host to host (vMotion) during production hours and in most cases letting the computer figure that out on its own (DRS). We are truly in the early stages of a robo datacenter where self-healing processes are being implemented (VMware HA, MSCS, WFC, FT etc).

Of course, we are definitely far from our goals of what the datacenter of the future is supposed to look like. But we are well on our way there. However, one aspect of the datacenter that seems to take a major hit from all the advancements of the last few years is the all-important disaster recovery. I look at DR as insurance: you will need it when you don’t have it, and when you have it, it may seem like an unnecessary overhead. But I am sure we have all been in situations where we wished our DR plan was more current, tested, more elaborate, and covered all aspects of the business. If you are one of those ‘lucky’ folks who wished they had some kind of a DR plan, join the party of many more like you. You are not alone.

With the tremendous amount of change that has come about in the last few years, the DR side has experienced a bit of stepchild-like treatment. It has become a back-burner project in a lot of organizations, where it should really be an integral part of the overall solution. With all the technological advancements that have been made recently, a lot of good DR solutions (ReliableDR, SRM, Veeam Backup & Replication) have also surfaced that could help one create a DR plan like never before. But let’s keep the specific tools and vendors out of this for now. Let’s just talk about what the good DR practices are for today and the future. What is it that others are doing that you may not be? Where is the gap in your DR plan? Are stretched datacenters really the answer to your problems? Can HA and DR be tied together? How well do you score when compared to your peers in the industry for DR preparedness? Where can you go to get this type of information?


Your prayers may have been answered. Recently a group of people, including myself, formed a council (the Disaster Recovery Preparedness Benchmark) that aims to increase DR preparedness awareness and improve overall DR practices. These are not salespeople who are trying to sell you stuff. These are people who work in the field like you and I and have a day job. Their jobs help them bring diversity to the council and share the knowledge and experience they have gained over the years. The idea is that all of that put together will help develop some standards and best practices for the industry to follow and bring some kind of method and calmness to the awesome madness we have witnessed over the last few years. You can take a look at it here. As with everything else, we are starting off with a survey that will help you evaluate your current situation. This data will also help bring in more information that will benefit the overall cause. Don’t worry, personally identifiable information is only collected when you voluntarily provide it. Start playing your role in taming the DR animal, protect your investment, and take your survey here.

PHD giveaway Fri, 28 Jun 2013 16:49:06 +0000 I have been keeping my eye on PHD Virtual for some time now, as their tools tend to make me feel smarter than I really am. It turns out they are having a free giveaway where you could potentially win a 1-year free license for either their VMware/Citrix backup solution or even their ReliableDR product that I reviewed just a few weeks ago.

So what do you have to do to win? Simply download their Virtual backup tool for VMware/Citrix or their Reliable DR product, from there on you are automatically entered to win. Here is the post that may have more information. Good luck!

How reliable is your DR plan? Wed, 05 Jun 2013 15:03:17 +0000 Over the past few years or so, a lot of new solutions have surfaced. There have been new ways of solving new problems and even ways to solve old problems more efficiently. I have been impressed by the simplicity that PHD Virtual brings to their backup solution, so when I first heard about “ReliableDR”, I had no reason not to take a peek.

PHD Virtual acquired VirtualSharp Software earlier this year. VirtualSharp had made some great strides to address the complex problem of disaster recovery. I know we all want a disaster recovery solution, and SRM is a great product. Stop laughing! Ok, it’s a great product but perhaps not so great to use, with all its complexities and nuances. At the end of the day, a great DR solution should ask us what we want our RPO and RTO to be and orchestrate a solution based on that. It should not be the other way around. I believe ReliableDR is definitely a step in that direction. Don’t get me wrong, I am not hating on SRM here; I am simply stating that ReliableDR does that in a very simple manner. It’s a tool that doesn’t require an extensive amount of research or consulting hours.

Cloud has definitely been the buzzword for some time now. And automation is key with any decent cloud service. That’s really what makes the magic happen. When you think about it, we have automation all around us. Do we not let our VMs float around from host to host (DRS) and from datastore to datastore (SDRS)? Do we not let the packets flow around in our switches and let them figure out what works best for them? Do we not let load balancers determine the best way to handle the load at any given time? Why not let automation come into creating a DR plan as well? Now, there would still be some manual work involved, but you will definitely not be starting from scratch and will be using a lot of information that already exists. I think that brings in a lot of value: not only cost savings in implementing a DR solution but also a tremendous amount of savings from a solution that will actually work.

Some of the key functionality that the product offers:

  • Automated, Continuous, Service-Oriented DR Testing – Maintains the integrity of your DR plan by being service / application centric, not data centric.  It takes a business-centric view of an application and its dependencies and then automates the verification of those applications as often as several times per day.  The typical DR plan is tested 1-2 times per year.  You can test several hundreds or thousands of times per year with ReliableDR!

  • Application-Aware Testing – Measures accurate Recovery Time and Recovery Point Actuals (RTAs & RPAs)

  • Certified Recovery Points – automatically storing multiple certified recovery points

  • Compliance Reporting – demonstrates DR objective compliance to auditors

  • Test, Failover, and Failback – Automation of failover and failback processes

  • Flexible Replication Options – Integration with all major storage vendors, multiple software based replication solutions including PHD Virtual, and also includes its own zero-footprint software-based replication capabilities

So then comes the million-dollar question. If it really does all that and it’s really simple to set up, then it must be really expensive. So how much does it cost? I am happy to announce that you could even get it for free. Obviously, that means with limited functionality. So below are the options that one has for the product:

  1. ReliableDR Enterprise Edition
  2. ReliableDR Foundation Edition
  3. FREE

Take a look at this link here for more details in comparing the three options.

My impression of ReliableDR is that it’s great for SMB. However, this does not mean that it’s not suitable for large environments; after all, they have an Enterprise edition for a reason. The reason I want to focus on SMB as the target market for this product is the simplicity of the tool without compromising functionality. Obviously this does not mean that SMBs can’t handle complex tools; they certainly can. However, more often than not I find SMB shops full of IT staff who are overworked and understaffed. This makes them jacks of all trades but leaves them with very little time to dedicate and focus on just one area. Such an environment may never be able to get past some of the complex DR solutions that are out there in the market today. For an environment like that, ReliableDR will do wonders. Or, like Steve Jobs once said, it will be like a cold glass of water in hell :D.

I believe large environments will also benefit from the automation of DR this product offers, its application awareness, scalability, and, last but not least, its business-centric view. The ever-changing datacenters require a DR solution that adapts with them and understands their dynamic nature. I think all those things combined with simplicity and low cost should make any engineer click this link right now. In times of disaster, one needs a tool that works every single time, and ReliableDR offers a very simple way to test failover and failback without making them expensive exercises. Moreover, most DR solutions tend to get out of date due to the time that passes between each exercise. With a tool like ReliableDR these exercises can be arranged more frequently, and perhaps when there is a true disaster there will be calmness in the datacenters versus the headless-chicken dance we often experience.

It makes sense for PHD Virtual to apply their simplistic approach to DR solutions. They have done a great job with the enhancements they have made to their backup solution over the years, and ReliableDR seems to be the next logical step. I was scared that I would find complex settings and installation procedures before getting the product going, but I was pleasantly surprised. Here are some webinars that will help one understand the product in more detail.

Imagine if we had a slider that we adjusted to dictate the amount of money we want to spend on our power bill every month and our homes adjusted to that number and aligned the utilization accordingly. That’s the level of simplicity that’s needed in the DR solutions for today. And I believe ReliableDR is definitely on its way there.


Vote for sessions at VMworld 2013 Tue, 23 Apr 2013 20:41:07 +0000 As you may already know, the VMworld sessions are the result of a rigorous process that starts very early. Before a session is added, it is approved, opened for public voting, and, after going through several other steps, becomes part of VMworld. This year I am fortunate enough to have been submitted for two sessions at VMworld. The sessions are now open for public voting, and I would appreciate your vote in getting my sessions approved and having the opportunity to speak at VMworld.

You can go to VMworld’s website and cast your vote. The public voting opened earlier today. Once there, simply filter the sessions and type the session IDs provided below into the keyword field. That should speed things up. However, it’s always a good idea to look for other sessions that may interest you. The important thing is for you to cast your vote. Of course, if that vote is for one of my sessions, that’s even better. You will need a VMworld ID in order to vote. You can also enter my name “bilal hashmi” in the keywords field to display both sessions like the screenshot below. From there onwards, it’s a matter of simply clicking the thumbs up. If it’s green like mine, that means you voted. Yes, I voted for myself :D

My sessions:

vCenter: The Unsung Hero (Protecting the Core) – Session ID 4873

This session will cover the challenges we face with vCenter in 2013, including the importance of keeping vCenter up at all times. With new dependencies in the stack that rely heavily on vCenter being available at all times, it has become challenging to keep all pieces of vCenter running. As we all know, we now have more moving parts in vCenter. This session will cover the gotchas and how you can secure your vCenter in order to keep the dependent services up at all times. I will be co-presenting this with a fellow vExpert, James Bowling @vSential. Please vote for this session, 4873.

Reports, Mashups and Analytics for your Datacenter – Session ID 5852

The other session is among my favorite topics to talk about. And what better place to talk about it than VMworld? It’s the topic of reporting and analytics for your datacenter. We have found ourselves busy with a variety of different techniques to retrieve the all-important reports. Some of us get to use expensive tools that simply don’t deliver or are too complicated to use. We also have the brave ones among us who would put PowerCLI to work, and of course the good ole Excel spreadsheets are never too far from a reporting discussion. We will be going over a different approach to retrieving critical information across your environment. This is not the one to miss. We will be unveiling some new capabilities of CloudPhysics for reports, mashups, and analytics for your datacenter. I plan to co-present this with another fellow vExpert, Anthony Spiteri @anthonyspiteri. Please vote for this session, 5852.

Obviously there are other great sessions that one should probably vote for as well. My suggestions are below. I think these are all great topics and would definitely be great additions to VMworld this year. Good luck, and I hope to see you all later this year.

  • 5852 – Reports, Mashups and Analytics for your Datacenter (Bilal Hashmi, Verizon Business; Anthony Spiteri, Anittel)
  • 4873 – vCenter: The Unsung Hero (Protecting the Core) (James Bowling, iland; Bilal Hashmi, Verizon Business)
  • 5778 – Solid State of Affairs: The Benefits and Challenges of SSDs in… (Steve Vu, ESRI; Irfan Ahmad, CloudPhysics)
  • 5818 – How I maximized my vCloud ROI with vSphere Clustering (Parag Mehta, Equinix; Jorge Pazos, City of Melrose)
  • 5854 – Software Defined Data Centers – Big Data Problem or Opportunity? (Ariel Antigua, Universidad APEC; Bob Plankers, Univ of Wisconsin)
  • 5900 – Flight Simulator for the VMware Software Defined Datacenter (Michael Ryom, Statens It, Denmark; Maish Saidel-Keesing, Cisco)
  • 5892 – The Decisive Admin: How to make better choices operating and designing vSphere infrastructure (John Blumenthal, CloudPhysics; Drew Henning, HDR Inc.)
  • 5859 – Storage and CPU Noisy Neighbor Issues: Troubleshooting and Best Practices (Krishna Raja, CloudPhysics; Maish Saidel-Keesing, Cisco)
  • 5823 – You are not alone: community intelligence in a software defined future vSphere infrastructure (Panel: Trevor Pott, eGeek Consulting; Bob Plankers, Univ of Wisconsin; Josh Folland, Moderator)
  • 4872 – Operating and Architecting a vMSC based infrastructure (Duncan Epping, VMware; Lee Dilworth, VMware)
  • 4570 – Ask the Expert VCDX’s (Panel: Rick Scherer, EMC; Matt Cowger, EMC; Chris Colotti, VMware; Duncan Epping, VMware; Jason Nash, Varrow)
More training than you can ask for Wed, 13 Mar 2013 17:20:47 +0000 Most of you must already be aware of what Trainsignal is. They have come up with all kinds of training videos in the past few years, covering not just VMware topics but also other competing and even complementing technologies: Citrix, Microsoft, and Cisco, to name a few.

It’s one thing to have a long list of training videos, but Trainsignal has really put out some top-quality material. Only now they have made all of this very, very affordable: only $49/month, or if you prefer an annual subscription, only $468/year for all the training videos they offer. What are those courses? Here is a list.

Of course, you can stream all their videos online, which I have been able to do in the past with no hiccups on my laptop and even my iPad. They also offer offline viewing, where you may download a video and watch it offline, like on a long flight perhaps. The only catch is that, for now, offline viewing is limited to the Windows and OS X platforms due to the Silverlight dependency. Hopefully that functionality will come to mobile devices soon. Go ahead and sign up for more training than you will ever find the time for.

CloudHook will hook you up – PHD Virtual 6.2 Thu, 07 Mar 2013 14:19:25 +0000 A few months ago I reviewed the backup tool from PHD Virtual for VMware. Yes, I specifically mentioned VMware because they have one for Citrix also. The company is on the verge of releasing their next major update, 6.2, and I got an opportunity to take a sneak peek.

Last year I was completely new to PHD Virtual and had to figure out all the moving parts, which weren’t many to be honest. The post from my last review is here. The one thing I loved about the product was its simplicity: I really didn’t have to sit down and figure out what was needed in order for it to work. I just followed the on-screen instructions and taadaaa!.. it worked. So, I have huge expectations this year and won’t be going into the details of how to set up and install.


The overall procedure to do the install is still pretty much the same. You are working with two pieces here.

  1. VBA (appliance)
  2. Console (windows installer)

In order to get the backup tool going, you will install the console on a Windows machine and deploy the VBA appliance. From that point some initial configurations are needed and you are all set to go: configurations like the IP address for the VBA, the storage to be used for backup, email notification, retention policies, write space, etc. What is write space? Good question, hold your horses.

You will also create your backup jobs here and kick off a manual backup or schedule one. All of this is pretty basic and I don’t plan on going into the details of how to complete these tasks. Once you have the VBA deployed, you will realize you don’t need anyone to blog about the how-to steps. And if you really need that, click on the “question mark” on the top right of the console window and you will have access to all the help you will ever need.


The documentation is embedded within the console, and it’s pretty good and detailed.

So what’s the big deal?

When I reviewed this tool last year, there were a few things that really impressed me: its simplicity in deployment, creating backup jobs, and recovering from backups. After all, what’s the point of a backup tool if you are unable to use it to recover your data?

Just like before, your backups can be used to do both file-level restore (FLR) and, for lack of a better term, bare-metal restore. What’s even better is the de-duplication ratio, which they market as around 25:1; my tests last year and this time around confirmed that as well.

So what’s the big deal? Are they simply re-releasing the older version with a new version number? Not really.

Instant Recovery:

One of the features I absolutely adore. This first came out with 6.0. What it does is pretty awesome.

The feature gives you the ability to recover a VM with little to no downtime. Yes you heard me right. Little to no downtime. How? What it does is pretty straightforward. Instead of retrieving from backup and doing a bare metal type restore, with Instant Recovery you are basically turning on the VM in a temporary data store that is created on the backup storage itself and presented to the ESXi host.

As soon as your VM turns on, you can simply Storage vMotion it into the desired datastore, its final resting place. But what if you are not licensed for Storage vMotion? No problem, someone thought about that already. In comes “PHD Motion”, which will move your VM from the temporary storage to the production storage, and yes, it will merge all the changes made during the move as well. All changed data is written to a place called “write space”, which is used to make sure PHD Motion moves your VM to its final location with all the changes in place.

But there is a new feature that the write space serves as well. And this, by far, is my favorite feature in this product. There is another type of recovery that I will be going over later in this post, and this new recovery method can really work great with our next feature.


For the last few years, everyone has fallen in love with the word cloud, even those of us who don’t really know what it means. Now that the definition of cloud is beginning to take some shape and it has been demystified for the most part, the next logical thing is to make use of cloud in our organizations.

Ever had to ship backup tapes to remote locations due to requirements that were set? In my experience dealing with tapes, tracking them and maintaining their rotation, delivering and receiving these bad boys isn’t something anyone looks forward to. But how else can we place our backups in a remote location and meet all the business requirements? Perhaps by storing our all-important data in the cloud?

That’s right – all those things that you can do with the VBA today. Imagine if your underlying backup storage was not managed by your SAN/NFS admins but instead resided somewhere in the cloud. Guess what? It’s possible now. With PHD Virtual Backup 6.2 you now have the option to back up your data into the cloud. Popular storage providers like Amazon S3, Google Cloud Storage, Rackspace CloudFiles, and OpenStack can all serve as your backup target now.

Of course the next question is: how hard is it to set up? Not hard at all. To give you an idea, I did a few tests on my local storage first before deciding to use up the limited bandwidth of my home connection, and it worked flawlessly. You are really not doing anything besides telling the VBA where in the cloud the data needs to be backed up.



Once you select your provider, you will provide some specific information like your access key ID, secret key, and bucket name. For my tests, I used the S3 option (this is not an endorsement; it just turned out to be the best option for me, so please do your own research). One gotcha right away was my bucket name. I was having issues getting my cloud info entered, and it kept complaining about not being able to access the storage. Turns out that if you are using S3 as your storage and your bucket name is all caps, you will have this issue. The fix is to not use caps. Obviously this was covered in the release notes that I was supposed to read, but silly me. I wanted to point out this little-known issue because I am sure there are more people out there like me who deploy first and read later, or only when they hit a wall.
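If you want to sanity-check a bucket name before typing it into the appliance, something like the sketch below will catch the all-caps problem. This is my own snippet based on Amazon's published S3 naming rules; it is not part of PHD Virtual, and other providers may have different rules.

```python
import re

def is_valid_s3_bucket_name(name):
    """Check the basic S3 naming rules: 3-63 characters, lowercase
    letters, digits, hyphens and dots only, and the name must start
    and end with a letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

print(is_valid_s3_bucket_name("MYBACKUPS"))   # False: uppercase trips the check
print(is_valid_s3_bucket_name("my-backups"))  # True
```

Had I run something like this first, the "cannot access storage" error would have pointed straight at the bucket name instead of sending me to the release notes.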

One more thing to keep in mind is the deduplication ratio with PHD Virtual. With cloud storage we are not only talking about disk space in the cloud but also the bandwidth to push that data around. I think this is where this product really comes in handy: it backs up all that you need without taxing your resources as much as one might expect, and with cloud storage that becomes even more important. 25:1 is the ratio they market, and that is roughly the number I have confirmed in my own tests.
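To put that ratio in perspective, here is some back-of-the-envelope math on what 25:1 deduplication means for the data you actually push over the wire. The 500 GB job size and 20 Mbps uplink are hypothetical numbers of mine, not anything from PHD Virtual's documentation.

```python
def data_pushed_gb(raw_backup_gb, dedup_ratio=25.0):
    """GB actually shipped to cloud storage after deduplication."""
    return raw_backup_gb / dedup_ratio

def upload_hours(data_gb, uplink_mbps):
    """Rough transfer time over a given uplink, ignoring protocol overhead."""
    return data_gb * 8 * 1024 / uplink_mbps / 3600

pushed = data_pushed_gb(500)                 # 500 GB raw -> 20.0 GB on the wire
print(round(upload_hours(pushed, 20), 1))    # ~2.3 hours instead of ~56.9 without dedup
```

At a home-lab-class uplink, that is the difference between a backup window of an evening and one of several days.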

Remember I mentioned another task that’s handled by the write space? Here it is: the write space is also used for caching data locally before shipping it out to the cloud. So the two jobs the write space is responsible for are instant VM recovery and cloud storage backup.

So how much space should be allocated to the write space? That’s a good question, and it really depends on what’s happening in your environment, such as how much data changes every day. The excerpt below, from the help document, explains how to figure out how big this space should be.

“A guideline for selecting the right size Write Space disk is to calculate at least 50% of your daily changed data, then attach a disk that can accommodate that size. Typically, changed data is about 5% of the total of all virtual disks in your environment that are being backed up. So for example, if you were backing up ten VMs every day that totaled 1 TB, you should attach a virtual disk that is at least 25 GB (Changed data is approximately 5% of 1 TB, which is 50 GB; of which 50% is 25 GB). Write Space can be expanded at any time by adding additional attached virtual disks.”
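That guideline is easy to turn into arithmetic. The helper below is my own sketch of the formula from the help document (not a PHD Virtual tool), and it reproduces the example in the quote:

```python
def min_write_space_gb(total_protected_gb, change_rate=0.05, fraction=0.5):
    """Vendor guideline: write space should hold at least 50% of daily
    changed data, where changed data is assumed to be ~5% of the total
    protected virtual disk capacity."""
    return total_protected_gb * change_rate * fraction

# The example from the quote: 1 TB (1000 GB) protected -> 25 GB minimum
print(min_write_space_gb(1000))  # 25.0
```

Remember the quote's caveat: 5% is a typical change rate, so plug in your own daily-change numbers if you have them, and you can always grow the write space later by attaching more virtual disks.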

Of course, because this space is shared by two tasks (cloud backup and instant recovery), it only makes sense to set thresholds on each task so that one doesn’t bully the other. With the slider in the write space configuration, you can set how much of the write space can be used for cloud backup; the rest is used for Instant Recovery. Pretty simple, aye!

Rollback Recovery:

With backups being sent to the cloud, it only makes sense to have the ability to do restores from the cloud as well. Most other products struggle to do this well for VMs, as they sometimes require you to pull all the backup files locally first before you can recover from them.

With Rollback Recovery, you can restore a VM to a previous point in time by recovering only the changes from the selected backup over the existing VM. This feature first came out in 6.1 and will certainly complement the CloudHook feature in 6.2. This obviously means a few things.

  • Restores will be super fast
  • They will consume less bandwidth, as less data will be sent between your site and the cloud storage provider
  • And last but not least, you can finally meet those RTOs and RPOs that always seemed unrealistic
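The bandwidth point is easy to quantify. Here is a rough comparison (the 100 GB VM and 5% change rate are hypothetical numbers of mine, not PHD Virtual's) of a changes-only rollback versus pulling a full image back from the cloud:

```python
def restore_transfer_gb(vm_size_gb, change_rate=0.05, rollback=True):
    """Data pulled from cloud storage: only the changed blocks for a
    rollback recovery, the whole image for a traditional full restore."""
    return vm_size_gb * change_rate if rollback else vm_size_gb

print(restore_transfer_gb(100, rollback=False))  # 100 GB for a full restore
print(restore_transfer_gb(100, rollback=True))   # 5.0 GB for a rollback
```

A twenty-fold reduction in data pulled across the WAN is exactly what turns an unrealistic RTO into a workable one.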

The only thing to note here is that it’s highly recommended to take a snapshot of the VM before doing a rollback recovery on it. With this approach, if the communication between you and the cloud breaks, you can at least revert the VM to its original state. Obviously the brains at PHD Virtual already thought about this, which is why the default behavior is to take a snapshot.

How can you select between the available restore types? Pretty simple: select the appropriate radio button in the screenshot below and move forward.



Of course, rollback recovery is also available if you don’t employ cloud storage for your backups; you can still use it with your on-site backups. I just used the cloud example because the benefit is more obvious when your backup data may be sitting thousands of miles away.


I am absolutely not a backup expert, and this is definitely not a full review of every feature this product offers. I have basically extended my earlier review of a previous version and only discussed two new features that I think are very good. As I mentioned before, the best thing for me is the product’s simplicity, where a person like me who does very little with backup and restore in his day job is able to deploy and test a product like this one.

Part of what’s happening in IT today is the convergence of technology. With it, the next thing to follow will be the convergence of the roles we have today. With the silos being taken down all around us, it’s important that we as IT professionals are capable of exploring tools that do tasks outside of what we specialize in. Though I may not be a backup expert, I can fully test and deploy a tool like this within minutes. This helps bring the convergence of roles to life to some degree.

This is a great solution for an environment that is 100% virtualized. But if you are like most organizations that have at least a few physical machines (other than those Oracle boxes, hehehe), you should have some other ideas in mind about how to back up the physical machines. PHD Virtual Backup is only for VMs; you are on your own for the physicals or will have to invest in another backup solution for them.

Each VBA only lets you select a single type of backup storage. What this means is that if you want your data to go to cloud storage and also have a backup locally available, you will need at least two VBAs: one will back up data to the cloud, the other to a local NFS, CIFS, or VMDK attached to the VBA. The good thing is you can manage both using the same console, as both VBAs will be part of the same vCenter. The bad thing is that your VMs will be backed up twice in order to go to two different places. It would have been nice if there were a way to back up only once and send the data to two destinations. From what I have been informed, this issue will be addressed in Q2 with the release of 6.3.

Cloud storage is a great idea for storing your backups. Like everything else, one must do a cost-benefit analysis: figure out the amount of data that changes, the bandwidth needed to push backups into the cloud, the cost of purchasing space in the cloud, and most importantly the value of the data being backed up. You have a few very good choices in service providers that already work with the product. In any case, I personally believe the tapes have served us well, and now let’s put them where they belong, in a museum that is. Using cloud storage for backup is definitely looking into the future, and PHD Virtual Backup does an excellent job of simplifying a very complex task. And with rollback recovery, restores also become very fast. After all, why back up when we can’t restore within a workable window? I urge you to try out the product in your labs. You will be pleasantly surprised, as I have been for some time now. Just be sure to remind yourself that you are not a backup expert; this product has a tendency of making you feel like one. :)
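If you want to start that cost-benefit analysis, even a crude model helps frame the conversation. Both prices below are placeholders I picked for illustration, not quotes from any provider (many providers charge nothing for inbound transfer, hence the zero default):

```python
def monthly_backup_cost(stored_gb, daily_upload_gb,
                        storage_price_per_gb=0.03,   # placeholder $/GB-month
                        ingress_price_per_gb=0.0):   # placeholder $/GB uploaded
    """Crude monthly cost: retained storage plus 30 days of uploads."""
    storage = stored_gb * storage_price_per_gb
    transfer = daily_upload_gb * 30 * ingress_price_per_gb
    return storage + transfer

# e.g. 2 TB retained in the cloud, 20 GB/day of deduplicated uploads
print(monthly_backup_cost(2048, 20))  # 61.44
```

Plug in your own retention, change rate, and provider pricing, then weigh the result against what tape handling and courier runs cost you today.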


Moving some vDatabases Wed, 20 Feb 2013 18:16:18 +0000 So at some point you will find yourself doing this, and most of us have already done at least some of these tasks at least once: moving the databases of our all-important applications. This is obviously not a complete list; I plan on adding more as I come across more ‘stuff’. Hopefully this will come in handy when you are asked to produce an SOP to help decommission your old DB servers. It will be extremely helpful if that request comes in on a Monday morning when you are coming back from a long vacation.

Obviously, if it’s not already mentioned in the links below, don’t forget to update your ODBC DSN (Data Source Name) settings if your application uses one (for example vCenter, VUM, etc.). The links mostly cover what to do on the application side before and after moving the databases, not so much how to perform the data migration itself. You can rely on your DBAs for that.

Move vCenter Database

Move SSO Database

Move vCloud Database

Though I have never done this, my gut tells me that a View Composer DB move requires an ODBC update. I am looking for both the Composer and Events DB migration instructions to add to the list above. If anyone already has a link, please share it with me.

As always, be sure to look at the versions in the links and the version you are working with in order to produce the desired results.

vCartoon of the Week (01/31/2013) Thu, 31 Jan 2013 22:00:28 +0000

When SSO goes bye bye, things get interesting Tue, 29 Jan 2013 22:22:01 +0000 So until today I was under the impression that SSO only affects the Web Client in 5.1. The way I understood it, the vSphere Client still behaves the way it did before, and SSO is not engaged unless the Web Client is used to log in. This also brought me to the conclusion that if SSO goes down, one cannot log in via the Web Client, but the vSphere Client can still be used. Wrong!!

A colleague of mine pointed me to a page that clearly states the following:

How does SSO integrate with the vSphere Client?

SSO does not integrate with the vSphere Client. However, when you log in through the vSphere Client, vCenter Server sends the authentication request to SSO.

Once I read that, I started doubting my thought process and my understanding of the importance of SSO in 5.1. Apparently all access to vCenter must be down once SSO is down (both via the Web Client and the vSphere Client).

After doing a lot of testing, this is what I found (vCenter 5.1 build 799731). When SSO is down:

  • access via the Web Client is down, as expected
  • access via the vSphere Client is flaky

What does flaky access mean? Well, I got mixed results and was finally able to see a pattern. When the SSO service is down, I was able to log in with an account that had had a successful login while SSO was running. The important thing here was that “use windows session credentials” had to be checked, which meant I had to be logged in with the account that had successfully logged in while SSO was up. If I didn’t check the box and entered the credentials myself, it told me the username and password were incorrect. I know I can fat-finger keys at times, but I tested this over and over to come to this conclusion. It wasn’t me. Access was only allowed when the checkbox was checked.

This also meant any account that was newly created or granted access couldn’t log in using the vSphere Client. Remember, we only had luck with accounts that had logged in successfully prior to the SSO service going down, and that too required the checkbox to be checked. If the account was created or granted access after SSO went down, the screen showed the beautiful message on the right. The same message was received if the account hadn’t successfully logged in while SSO was up. Why this message can’t say that SSO cannot be reached is beyond me. By the way, the Web Client will tell you “Failed to communicate with the vCenter Single Sign On server” when SSO is down, so thank you VMware for doing that.

Another thing to keep in mind: when the SSO service is down, your vCenter service continues to run. However, if you attempt to restart your vCenter service you will find yourself in trouble; I was unable to get the vCenter service to start with SSO offline, which makes SSO even more important. Yes, even with vCenter down your VMs continue to work, but vCenter-specific features like DRS and Storage DRS will not function. And if this vCenter is connected to a vCloud instance, that’s another can of worms.

So the bottom line is, SSO is very, very important. It has two parts: the application and the DB. VMware has done a great job of giving the option to install SSO as a single-node, clustered, or even multi-site deployment, so high availability on the application side is well thought out. However, the problem is the DB. VMware does not fully support the SSO DB on a SQL cluster; as a matter of fact, there have been known issues when trying to deploy SSO using a SQL cluster. So the only fully supported option is a standalone SQL node. But that also creates a single point of failure. When the DB goes down, you are unable to log in using the Web Client, you may be able to log in using the vSphere Client, and all the other caveats we discussed above apply.

So building in redundancy is extremely important. VMware’s recommended solution is to use vCenter Heartbeat. We all know that can be a pricey solution; however, if full support along with redundancy is important to you, that is the way to go. I hope VMware extends full support to at least allow running the DB on a SQL cluster for all their products, including vCenter (which is still a grey area). That would be the right thing to do. Heartbeat provides added functionality, and there will always be a market for that as well. I hope full support for DBs residing on SQL clusters is not further delayed in the interest of the vCenter Heartbeat product.

In the end I will borrow Tom Petty’s words to tell VMware “Don’t do me like that”…

Kemp ESP – Microsoft Forefront TMG and beyond Mon, 28 Jan 2013 17:47:25 +0000 A fellow vExpert pointed me to a product that I think is pretty cool and will probably fit the needs of many who have been scratching their heads since the end of sale of Microsoft Forefront Threat Management Gateway. With virtualization and cloud computing, we tend to think that security is no longer needed. In my opinion, security has become even more important and has to be reworked around the way infrastructure is deployed these days, especially in a multi-tenant environment.

Anyways, sticking to the topic here: Kemp Technologies is introducing their Edge Security Pack (ESP), which will enable you to continue deploying solutions like SharePoint, Exchange, etc. securely. I am not an expert in this area, but I try to keep up with what’s happening around me. You can read up more on the solution here. Remember, just because your solution is virtual or in a cloud doesn’t mean you no longer need security. It’s still required, and it has to be up to par with today’s technology. Below is the introductory video of what the Kemp ESP is expected to do.


Disappearing data after a migration – Access-based Enumeration Thu, 24 Jan 2013 23:26:06 +0000 One thing I love about Windows is the fact that there is still a whole lot left for me to learn about it. Not just the big infrastructure services that Windows provides, but the little additions that have come along in recent Windows versions. Now, I know what I am about to talk about is not exactly new; it was first introduced in Windows 2003 SP1. But this was something that fell off my radar, and I didn’t really notice it until recently. So I will try to publicize it in case it fell off your radar also.

So let’s say you have a share that users access. The share resides on a file server that runs Windows 2003. The share has 50 folders, and not all users have access to all of them. When a user tries to access a folder to which access is denied, an access denied message appears. Awesome! Now you decide to take advantage of a newer OS; we will use Windows 2008 R2 as an example. So you migrate the data over to the 2008 R2 file server and enable the share. Suddenly pigs start flying, your wife starts agreeing with you, and the Patriots start beating the Giants in the Super Bowl. What happened?

So you start hearing things like, “Hey, I can only see 20 folders; before, I was able to see 50.” People throw out all kinds of different numbers, and your initial reaction is: what in the world! Of course the folders are all there, and you can confirm that when you log in to the server yourself with your administrative access. So what happened? Why are users not able to view all the folders in the share? Well, your Windows server just got a little more secure.

What you will start noticing is that users are only able to see folders they have access to. So if a user only has access to the “Finance” folder in the share, the only folder that will appear out of the 50 is the “Finance” folder. Pretty nifty, aye! If one doesn’t have access to a folder, the folder will be invisible. This is happening due to a feature called “Access-based Enumeration.” You can read more about it in this article. And yes, this is enabled by default.

So the obvious question is: can this be disabled? Well, without getting into why you would want to do this, the simple answer is YES. On the 2008 R2 file server, you basically go to the properties of the share using the “Share and Storage Management” console.

Once there, click the “Advanced” button in the “Sharing” tab, and there it is. Unchecking the checkbox disables the feature, and your users will once again start seeing folders they don’t have access to. My advice: leave it enabled. Why tease them when they can’t access it? :)

