Cloud-Buddy - Shrink your datacenter and move it around - Bilal Hashmi

Good Ole P2V (Wed, 05 Nov 2014)

This year I haven't had much opportunity to blog due to the type of work I have been involved in. However, as soon as I find an opportunity I grab it, and this one happens to be about P2V. I recently had to use P2V after a long time for a POC I can't share details about at this time. Having not done a P2V in ages, it is safe to say that my confidence was somewhat crushed by some of my initial tests. Here are some of the things I learned.

If your source machine, your P2V server (where Converter is installed), your vCenter and your target hosts have any firewalls between them, please go to this link and make sure all the communication channels are open.

In my tests there were firewalls in play and we opened the ports accordingly. Once the ports were open we did a P2V and, after some issues with SSL, everything appeared to work. Once SSL was taken out of the equation, the machine was imported to the target vCenter and powered on with no issues. Seemed pretty standard. However, our later tests kept failing, which was quite frustrating. The job would fail a few seconds after being submitted with the following error:

“An error occurred while opening a virtual disk. Verify that the Converter server and the running source machines have network access to the source and destination ESX/ESXi hosts.”

By the way, the error above is my least favorite, as it can mean so many different things. The key is always in the log files. After looking through the log files of our repeated failed attempts, we noticed something interesting in the worker log file:

2014-10-29T11:18:21.192-04:00 [03652 info 'task-2'] Reusing existing VIM connection to 10.194.0.120
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: Get virtual disk filebacking [Cluster-1] PhysicalServer/PhysicalServer.vmdk
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: updating nfc port as 902
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: get protocol as vpxa-nfc
2014-10-29T11:18:21.207-04:00 [03652 info 'task-2'] GetManagedDiskName: Get disklib file name as vpxa-nfc://[Cluster-1] PhysicalServer/PhysicalServer.vmdk@ESXi4:902!52 c3 66 97 b0 92 a0 93-38 cd b9 4a 17 f8 e0 00
2014-10-29T11:18:33.048-04:00 [03652 info 'task-2'] Worker CloneTask updates, state: 4, percentage: 0, xfer rate (Bps): <unknown>
2014-10-29T11:18:33.048-04:00 [03652 info 'task-2'] TargetVmManagerImpl::DeleteVM
2014-10-29T11:18:33.048-04:00 [03652 info 'task-2'] Reusing existing VIM connection to 10.194.0.120
2014-10-29T11:18:33.064-04:00 [03652 info 'task-2'] Destroying vim.VirtualMachine:vm-810174 on 10.194.0.120
2014-10-29T11:18:34.062-04:00 [03652 info 'task-2'] WorkerConvertTask: Generating Agent Task bundle for task with id="task-1".
2014-10-29T11:18:44.467-04:00 [03652 info 'task-2'] WorkerConvertTask: Retrieving agent task log bundle to "C:\Windows\TEMP\vmware-temp\vmware-SYSTEM\agentTask-task-1-vmepkywh.zip".
2014-10-29T11:18:44.483-04:00 [03652 info 'task-2'] WorkerConvertTask: Bundle successfully retrieved to "C:\Windows\TEMP\vmware-temp\vmware-SYSTEM\agentTask-task-1-vmepkywh.zip".
2014-10-29T11:18:44.483-04:00 [03652 error 'Default'] Task failed:

One thing that stood out was the host name in the disklib line above (ESXi4). We were exporting the physical machine to a host in Cluster-1. The wizard lets you select an individual host inside a cluster, but that doesn't really mean much. Our cluster was an 8-node cluster, and we noticed that in every failed attempt the job would pick any of the 8 hosts in the cluster, not the one we selected (ESXi8). Also, right before submitting the job we noticed that although a particular host was selected as the target, the summary page showed the cluster, not the individual host, as the target location.

Remember I mentioned we have firewalls between our hosts, vCenter, source machine and P2V server? When we opened the ports we figured we would only need access to a single ESXi host, since the P2V wizard was allowing us to select an individual host in the cluster. As it turns out, because this host was in a cluster, all hosts in that cluster need the communication channels open, as was evident from the worker log file. Once we made that change, the P2V worked like a champ over and over. Another option would have been to take our target host out of the cluster.
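To sanity-check this before submitting a job, a quick port test against every host in the target cluster saves a lot of trial and error. Below is a rough Python sketch of that check; it is my own helper, not part of Converter, the host names are placeholders, and ports 443 and 902 are the standard Converter management and NFC ports (902 is the one that shows up in the worker log above).

    # A rough connectivity check, not part of Converter itself. The host names below are
    # placeholders -- substitute the members of your own target cluster.
    import socket

    cluster_hosts = [f"esxi{i}.example.local" for i in range(1, 9)]  # e.g. an 8-node cluster
    ports = [443, 902]  # HTTPS management traffic and NFC (the port shown in the worker log)

    for host in cluster_hosts:
        for port in ports:
            try:
                # A successful TCP connect means the firewall path is open for this host/port pair
                with socket.create_connection((host, port), timeout=3):
                    print(f"{host}:{port} reachable")
            except OSError:
                print(f"{host}:{port} blocked - open this before submitting the P2V job")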

So, how did we get it to work the first time? I blame that on luck. We went back to the log files and confirmed that the first time it actually worked, the job happened to have selected the host we opened the ports for. I should have bought a lottery ticket instead :(. Luckily we tested more than once and were able to confirm everything that needs to be in place for this to work. I may have known this some time ago, but having been away from P2V for a while, it was refreshing, challenging and rewarding to finally get it working. The Converter version was 5.5.1 build-1682692.

Lesson learned? Do lots of testing.

PHD Virtual Backup (v8) (Mon, 07 Jul 2014)

Recently I got an opportunity to play around with Unitrends' backup solution, which is still in beta. For those of you who are not aware, PHD Virtual is now part of Unitrends. At first I was a bit skeptical, because I have seen products go to waste after an acquisition. But in this case I am quite happy with the progress the product has made under the new name.

My goal is to share my views from my testing of the product and also try to explain the new architecture. Once you get the hang of it, it's really simple and makes a lot of sense. In my initial attempt I sort of screwed things up, but that was mostly because I was being hasty and didn't bother reading the few simple concepts.

Like previous versions, version 8, the one we will discuss here, revolves around the good ole VBA. However, the VBA now does a lot more and has delivered on past promises. For example, my previous complaint was the lack of a single pane of glass to manage multiple VBAs; that's no longer a problem here. In fact, they went a step further and made it possible to manage VBAs deployed in Citrix or even Hyper-V environments.

Architecture:

In version 7, the concept of appliance roles was introduced and version 8 continues with this model. Each appliance can be dedicated to a single role or a single appliance can have multiple roles configured. So what are those roles?

Presentation (P) – The Presentation appliance is the appliance running the web-based interface you use to configure and manage your installation. Only one presentation appliance is necessary per installation, across all configured environments (including across all hypervisor types). All management and configuration of PHD Virtual Backup occurs through the Presentation Appliance’s web interface.
Management (M) – Each environment requires one appliance designated as the Management Appliance. This appliance performs inventory and other hypervisor-specific tasks and manages the work of the Engine appliances. Each environment you add to your PHDVB deployment requires the IP address of one appliance to act as the Management Appliance. The Presentation Appliance can also be designated as a Management Appliance.

 

Engine (E) – Engine appliances perform the actual data processing and send data to their configured data stores. Engine is the most common role an appliance will take on in your deployment. Appliances with the Presentation and Management roles can also be configured with the Engine role.

 

Note: As a general recommendation, you will need at least one Engine appliance for every 10 TB of source data you will protect (or every 1 TB of data if using XenServer, or 5 TB if using CIFS backup storage).
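To put that guideline into numbers, here is a small back-of-the-envelope calculation. This is my own sketch, not a Unitrends sizing tool, and the data sizes plugged in are invented.

    import math

    # TB of protected source data one Engine can handle, per the note above
    PER_ENGINE_TB = {"vsphere": 10, "xenserver": 1, "cifs_target": 5}

    def engines_needed(source_tb: float, scenario: str = "vsphere") -> int:
        """Minimum number of Engine appliances for a given amount of source data."""
        return max(1, math.ceil(source_tb / PER_ENGINE_TB[scenario]))

    print(engines_needed(24))                 # 24 TB of vSphere VMs -> 3 Engines
    print(engines_needed(24, "cifs_target"))  # same data on CIFS backup storage -> 5 Engines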

 

So for any deployment you will need at least one VBA that has the P, M and E roles. If the environment is small enough, a single VBA can host all those roles, and you can add more Engines as the environment grows. You can tell someone was thinking about scaling. The latest version of the beta also includes a tutorial video that gives you a high-level view of what the VBA roles may look like when laid out. This will also assist you in determining what will be ideal for your environment. I highly recommend you watch it before moving forward with the full configuration.

Deploying:

I am stealing a diagram from Unitrends documentation to explain how the roles can be laid out.

[Diagram: deployment with a single hypervisor]

In the example above, as you can tell, three VBAs are deployed. It's a very small two-host environment, but it may be a good idea to isolate the Engine from the P and M roles if expansion is anticipated down the road. Technically you could get by with a single Engine unless you needed different kinds of backup data stores to back up the data. Yes, the Engine supports NFS, CIFS and attached storage, however you can only have one type of storage per Engine. So in the example above, you could assume VBA2 uses NFS and VBA3 uses attached storage. It's also possible that the hosts are running really large VMs that exceed the 10 TB limit per Engine, and that's why two Engine VBAs are deployed.
What could be another reason for two VBAs in the example above? Is it possible that host 1 is running ESXi and host 2 is running Hyper-V? I would say YES, it's possible to manage two different environments using the same portal, but there is one important piece we are missing: the Management role. Take a look at the diagram below:

[Diagram: deployment with multiple hypervisors]

The diagram above is identical to the one before it, with one key difference: host 2 also has a Management VBA, which means the example above could very well work with environment 1 being vSphere based and environment 2 being Hyper-V or even Citrix based. Isn't that awesome!?

Your Engine is attached to a disk where the backups are stored. When attaching the storage, you can turn encryption on if it helps you sleep better or if some compliance requirement calls for it. You can also set the compression rate of the backup and the block size. Below is some useful information to help make those assessments.

Backup Compression:
• Use BDS Setting – The default setting; this option will use the compression setting applied to the backup data store. The default setting at the backup data store is GZIP.
• Uncompressed – No compression is applied to the backup data. This can be useful (or required) when using storage devices that perform additional compression.
• GZIP – This compression method performs the highest compression level, resulting in a reduced amount of storage space used. The additional compression used by GZIP will impact performance – backup speed and overall backup time.
• LZOP – This method results in less compression than GZIP but in most cases will impact performance much less, resulting in compressed backups faster than the GZIP alternative.

Block Size:
• Use BDS Setting – The default setting; this option will use the block size setting applied to the backup data store. The default setting at the backup data store is 1 MB.
• 512 KB, 1 MB, 2 MB, 4 MB – The size of the backup block used to store backup data. This should not be adjusted in most situations unless instructed to by support. Adjusting the block size can impact backup and restore performance.

My wow moment:

The deployment is really simple. If you can deploy an OVF you are golden, as long as you understand the architecture. Because I didn't in the beginning, I managed to screw up my initial deployment. The folks over at Unitrends were kind enough to get on a call with me to take a look at my issues. At first glance I was informed that I needed to update my appliances, just like any good ole vendor would suggest. My eyes started to roll until I found out how simple that really is.

[Screenshot: appliance update]

The folks over at Unitrends sent me a link to a file which took a few seconds to download. I unzipped it, went to the Configure tab of the portal on my VBA, hit upload, selected the file with the .phd extension and hit save. And that was it. So I figured if I only had to do that for each VBA it shouldn't be too bad. But I was informed that, because I did it from the Presentation VBA that knows the Managers and Engines in my environment, all the appliances would be updated and I would get a message once that completed. Sure enough, 10 minutes later I got the message, and as I hit OK, all the branding had changed from PHD to Unitrends and all my appliances were updated. Way better than my original plan of redeploying with the latest appliance. I know this doesn't highlight the tool's backup abilities, but I think the tool itself has proven over the years that it can do backups and more. To me, the manageability of your backup solution is key.

One key thing to note is that there is also an auto-update feature that pretty much negates the need for you to update the appliances at all, as it will happen on its own.

The portal and UI:

When you log in to the portal you get a holistic view of your entire environment. This includes the number of VMs, the number of VMs that have been protected with backups, recent restores, jobs, alerts, available storage, etc., all in one place. Don't forget you could be pulling this information from your entire environment, which could be hypervisor agnostic. I think this dashboard brings good value to the table and gives a good high-level view of things. You can hit the cog on the right to change how often this data is refreshed; the default is 5 seconds.

[Screenshot: Portal UI]

The errors and some alerts are really my doing. I didn't realize it was a bad idea to try and back up 400 GB of data to a 40 GB backup volume. But I was quickly able to add more storage and get past that.

The rest of the tabs are pretty self-explanatory. The Protect tab allows you to back up and replicate VMs; the Recover tab does exactly what it says. You can do full restores or even file-level restores. One of the things I like is the ability to change the MAC address on the VM when doing a full restore; it's good to see features from the past are still baked in. The Jobs tab lets you create new jobs, monitor them and view recent jobs.

[Screenshot: Reports tab]

I really like the Reports tab because it presents all the useful information you can think of right out of the box in a very pretty way. Way prettier than I can ever make it, because I just don't have that kind of patience or talent. There are reports around VMs, virtual appliances (this tells you the version your VBAs are running), replication, archives and storage. So there should be all kinds of reports for you to pass around in meetings, with not just pretty colors but also very useful information.

The Configure tab is where all the magic happens. This is where you add the multiple VBAs and assign them the appropriate roles. This is also where you can do the update-all-my-VBAs magic with a single click. Your SMTP info for email alerting and the credentials for FLR are also stored here.

Lastly another thing that’s covered in this tab brings me to my next topic.

Licensing:

The solution gets installed with a trial license at first. And once you realize you kinda like this, they want money. Urgghh, those evil people. Jokes aside, cost is also a key factor for a lot of organizations. I don't have the pricing information at this time. However, I do know the product is licensed on a per-socket basis, meaning the sockets of the hosts. So if you have 10 hosts with 4 sockets each and need to protect all the VMs on them, you will need 10 * 4 = 40 licenses. Hopefully the cost will be acceptable, like in the past.

Conclusion:

Overall I really like the solution. I didn't cover a lot of the actual backup technology used in the back end because most of that has been covered by others and myself in the past; I will post a few links to my previous reviews. I wanted to point out all the new things that I saw and to go over the architecture so that you understand what it will take to deploy this product. If you have half a working brain like myself, all you will need is this post and access to a vCenter where you can deploy the OVF. And you will feel like an all-important backup guru like myself (that's a joke, I am anything but). Just make sure you watch the tutorial video once you deploy the first VBA; that will really make things extremely simple for you.

The one thing I really liked was the way appliances are updated, and I think I made that quite clear when I was on the phone with Unitrends. Like everything else, there are things one could wish for. Though this is kinda hard to come up with at this point, I think that instead of deploying every VBA from vCenter, it would be really nice if only the first deployment were needed from vCenter and the rest could be handled from within the solution's portal. Perhaps there could also be a report that suggests adding more Engines to improve performance. The data already seems to be there; it just needs to be tied together. In the end, good job PHD Virtual, you have done great. And Unitrends, don't let us down, because we are spoiled.

Previous reviews:
CloudHook will hook you up – PHD Virtual 6.2

How reliable is your DR plan?

New Toy from PHD – RTA Calculator (Sat, 30 Nov 2013)

PHD Virtual recently released a free tool called the Recovery Time Calculator, which calculates the time it will take to recover virtual machines and critical applications in the event of an outage or disaster. Did I mention it's FREE? Below is the press release.

Dubbed the RTA Calculator, for the ‘Recovery Time Actual’ estimate it provides, PHD’s free tool can be easily downloaded and then immediately provides visibility into what your organization’s actual VM recovery time would be in the event of an outage.

The RTA Calculator has a built-in wizard to connect to VMware. Once installed you are prompted to select the VMs you wish to time for an RTA estimate, and set the appropriate boot order. The RTA Calculator will then take a snapshot and create linked clones for each VM. Due to the use of snapshotting and linked clones, the VM creation process is very quick. The tool then simply powers up the VMs and times the process, calculating the total time it will take to recover that grouping of VMs – it’s that simple! This gives you an accurate Recovery Time Actual you can use to compare to your Recovery Time Objective and determine if you’ll be able to adhere to your SLAs.
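In other words, the tool hands you a measured number that you can hold up against your SLA. As a trivial illustration of that comparison (the minutes below are invented):

    measured_rta_minutes = 47   # what the RTA Calculator timed for your VM group
    rto_minutes = 60            # what your SLA promises

    if measured_rta_minutes <= rto_minutes:
        print(f"OK: {rto_minutes - measured_rta_minutes} minutes of headroom against the RTO")
    else:
        print(f"At risk: recovery runs {measured_rta_minutes - rto_minutes} minutes past the RTO")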

Run the RTA tool as often as needed to produce an estimate with different production loads.

What You’ll Need

System Requirements

Like all PHD Virtual products, the RTA Calculator is highly effective while still maintaining ease of use. It requires no training or product documentation. All you need to know is contained within this short video demonstration.

Other than that, just ensure you meet the following 3 requirements:

  • The RTA Calculator is a Windows application that requires an Administrator account and .Net 4.0.
  • The RTA Calculator supports VMware ESX or ESXi with vCenter 4.0 (or higher) with default ports.
  • The RTA Calculator will need the VMware guest tools installed.

Download the FREE tool here: http://www.phdvirtual.com/free/rta-calculator

CloudPhysics and Admission Control tuning (Fri, 06 Sep 2013)

A while back I made a little demo video that showcased one of the CloudPhysics cards that is still my personal favorite. I figured it would be a good idea to share it in case there is anyone out there who hasn't taken CloudPhysics for a spin yet.

[Embedded video: CloudPhysics Admission Control demo]

Recovery Management Suite (Tue, 13 Aug 2013)

My last few posts have been about backups and DR, and I have extensively covered PHD Virtual's products and their capabilities. I have covered PHD Virtual Backup, CloudHook (my review) and, just recently, ReliableDR (my review). PHD Virtual has now released their Recovery Management Suite (RMS), which ties all these products together and delivers an extremely powerful solution that covers everything from simple backups to a whole-site failure (without making it an extremely complex undertaking).

 

[Image: Recovery Management Suite overview]

So why is this such a big deal? There are various products out there with similar capabilities, and that is true. However, I can't think of a single product that does all that RMS offers and still manages to keep it simple. For example, let's take a look at SRM (not trying to talk trash, but simply using it for comparison). The business-centric view of ReliableDR, the DR automation, and the ease of use in a multi-tenant environment for cloud providers set this product in a league of its own. Now if we throw the other two products into the mix (Virtual Backup and CloudHook), PHD Virtual starts to sound like a real complete solution. And it truly is. Did I point out that PHD Virtual has its own replication mechanism that can be used to replicate data between sites?

I really like SRM too. However, being a technologist, I like to compare products and see where one serves a better purpose. Let's put SRM in the back seat for now, call on Veeam for a change, and see how some of its features stand against RMS. I am a big fan of Veeam Backup & Replication also. However, there is some real planning that goes into setting up Veeam's backup manager, proxies and repositories. In the case of PHDVB things are pretty simple. And once you tie in cloud storage, things get even better. And just like Veeam, PHDVB also offers support for multiple hypervisors, but the ReliableDR portion is what puts PHD Virtual's products miles ahead of Veeam in my opinion.

For the first time, companies of all sizes have a unified, affordable solution that automates and assures recovery processes in order to reduce risk, decrease recovery times, and help sustain the business throughout any issues that might occur. PHD Virtual RMS is the only solution to offer comprehensive, integrated recovery that addresses the entire recovery continuum. It provides unified data protection and disaster recovery capabilities delivered through integrated backup, replication, recovery and DR orchestration that is powerful, scalable, easy to use, and delivers immediate value “out-of-the-box”.

[Image: Recovery Management Suite components]

Obviously nothing in life is perfect, and that is also the case with RMS and its components. There is room for improvement, and I am more than certain that the improvement will come before you know it. I am saying this from experience. PHD Virtual has made a tremendous number of changes and enhancements to their products over the last few years, at a pace only a handful of organizations can match. What may seem like the next logical step actually does happen at PHD Virtual, and I hope they maintain that trend. As a matter of fact, some of what I thought should be included in future releases of the products also made it. For example, I wondered why all the PHD products weren't being offered as a single solution that worked together like one... tadaaaaa! So there, an organization that can keep a character like myself pretty satisfied has a lot to offer in the years to come.

Some of the latest enhancements coming in the upcoming versions of RMS are as follows:

  • Automates the replication of backup data from a source Backup Data Store (BDS) to a remote BDS (including cloud storage).
  • Only changed data is moved from the source BDS to the archive BDS (a bandwidth saver; more importantly, data headed to the cloud will not move from production – it will likely move from your primary BDS, hence not even touching your production load).
  • The archive BDS can have different retention policies than the source (I complained about this).
  • You can configure individual VMs to be replicated, or synch the entire BDS with the archive location.
  • You can configure multiple source BDS locations to be archived to a single archive BDS location.
  • The archive BDS supports global deduplication across all source BDS data
  • Configure a generic S3 backup target to leverage additional object storage platforms, other than Amazon, Rackspace, and Google (Yoooohooo! more options)
  • Certified Replica – Automating the testing of virtualized applications that have been replicated using PHD Virtual Backup
    • A new CertifiedReplica job exists in ReliableDR for configuration of recovery jobs against PHD VM replicas
    • These jobs can be used to automate complex recovery specifications for failover and testing of PHD VM replicas. What kinds of complex jobs? For example:
      • Boot orders
      • Network mapping for test and failover
      • Re-IP for Windows AND Linux (unlike Veeam, here you can re-IP Linux machines :))
      • Application testing and other recovery tasks
    • Verification can be scheduled at intervals as frequently as every 4 hours
    • Hypervisor snapshots are used for CertifiedReplica recovery points and other testing/failover tasks, making the entire process storage agnostic

And lastly some rumors. I have heard that ‘Certified Backups’ are also in the works. Yes machines are taking over! Look at what certified backups have to offer if it makes it through.

  • Automating the testing of VMware virtualized applications directly from backups using Instant Recovery from PHD Virtual Backup
    • A new CertifiedBackup job that initiates Instant Recovery jobs within PHD Virtual Backup and automates boot orders, network reconfiguration, and other recovery tasks
    • These jobs can be used to automate VM recovery specifications for testing of PHD backups
    • When testing is complete, Instant Recovery sessions are terminated automatically
    • Verification can be scheduled at intervals as frequently as every 4 hours

When you look at all the capabilities tied into a single product, you have no choice but to give it a go. Like I mentioned earlier, I like other products too, but if you are looking for a single solution that addresses your backup and DR needs, you have to give RMS a go. It would be silly to overlook it. I hope VMware, Veeam and other vendors try and come up with their versions of RMS as well (though it will be interesting to see VMware come up with a solution that supports other hypervisors too).

I have been asked a couple of times in the past if ReliableDR supports vCloud Director and the answer is YES, it does. Try RMS for free today.

DR Benchmark (Tue, 16 Jul 2013)

There is an ever-growing need for some kind of business continuity / disaster recovery plan that keeps the ball rolling when the unexpected happens. There is an enormous amount of data being collected every day, and businesses can't afford to lose it. It's valuable. It's not only important from a business perspective, as it aids their purpose and mission, but I also believe that the data being captured today marks a new way of recording history like never before. And we are the generation that is making it possible. As it is our biggest contribution to spreading the wealth of knowledge and information, it becomes extremely important to protect it. For those reasons I believe disaster recovery should be an integral part of any implementation made in today's day and age.

Now, the last few years have been a lot of fun for any techie. So much has changed just in the last 4-5 years, so much is changing today, and the near future promises a lot more change. Of course most of it will be good, and some will be made fun of down the road. Due to all the changes that have come about in the datacenter over the past few years, there has been a paradigm shift. For example, shops that required change windows to restart a machine are now moving their machines from host to host (vMotion) during production hours, and in most cases letting the computer figure that out on its own (DRS). We are truly in the early stages of a robo datacenter where self-healing processes are being implemented (VMware HA, MSCS, WFC, FT, etc.).

Of course, we are definitely far from our goals of what the datacenter of the future is supposed to look like, but we are well on our way there. However, one aspect of the datacenter that seems to take a major hit from all the advancements of the last few years is the all-important disaster recovery. I look at DR as insurance: you will need it when you don't have it, and when you have it, it may seem like unnecessary overhead. But I am sure we have all been in situations where we wished our DR plan was more current, better tested, more elaborate and covered all aspects of the business. If you are one of those 'lucky' folks who wished they had some kind of a DR plan, join the party of many more like you. You are not alone.

With the tremendous amount of change that has come about in the last few years, the DR side has experienced a bit of stepchild-like treatment. It has become a back-burner project in a lot of organizations where it should really be an integral part of the overall solution. With all the technological advancements that have been made recently, a lot of good DR solutions (ReliableDR, SRM, Veeam Backup & Replication) have also surfaced that can help one create a DR plan like never before. But let's keep the specific tools and vendors out of this for now. Let's just talk about what the good DR practices are for today and the future. What is it that others are doing that you may not be? Where is the gap in your DR plan? Are stretched datacenters really the answer to your problems? Can HA and DR be tied together? How well do you score when compared to your peers in the industry for DR preparedness? Where can you go to get this type of information?

[Image: Disaster Recovery Preparedness Benchmark outage statistics]

Your prayers may have been answered. Recently a group of people, including myself, formed a council (the Disaster Recovery Preparedness Benchmark) that aims to increase DR preparedness awareness and improve overall DR practices. These are not sales people trying to sell you stuff. These are people who work in the field like you and I and have a day job. Their jobs help them bring diversity to the council and share the knowledge and experience they have gained over the years. The idea is that all of that put together will help develop some standards and best practices for the industry to follow, and bring some kind of method and calmness to the awesome madness we have witnessed over the last few years. You can take a look at it here. As with everything else, we are starting off with a survey that will help you evaluate your current situation. This data will also help bring in more information that will benefit the overall cause. Don't worry, personally identifiable information is only collected when you voluntarily provide it. Start playing your role in taming the DR animal, protect your investment and take your survey here.

PHD giveaway (Fri, 28 Jun 2013)

I have been keeping my eye on PHD Virtual for some time now, as their tools tend to make me feel smarter than I really am. It turns out they are having a free giveaway where you could potentially win a 1-year free license for either their VMware/Citrix backup solution or their ReliableDR product that I reviewed just a few weeks ago.

So what do you have to do to win? Simply download their Virtual Backup tool for VMware/Citrix or their ReliableDR product; from there on you are automatically entered to win. Here is the post that may have more information. Good luck!

How reliable is your DR plan? (Wed, 05 Jun 2013)

Over the past few years or so, a lot of new solutions have surfaced. There have been new ways of solving new problems and even ways to solve old problems more efficiently. I have been impressed by the simplicity that PHD Virtual brings to their backup solution, so when I first heard about “ReliableDR”, I had no reason not to take a peek.

PHD Virtual acquired VirtualSharp Software earlier this year. VirtualSharp had made some great strides in addressing the complex problem of disaster recovery. I know we all want a disaster recovery solution, and SRM is a great product. Stop laughing! OK, it's a great product, but perhaps not so great to use with all its complexities and nuances. At the end of the day, a great DR solution should ask us what we want our RPO and RTO to be and orchestrate a solution based on that, not the other way around. I believe ReliableDR is definitely a step in that direction. Don't get me wrong, I am not hating on SRM here; I am simply stating that ReliableDR does this in a very simple manner. It's a tool that doesn't require an extensive amount of research or consulting hours.

Cloud has definitely been the buzzword for some time now, and automation is key with any decent cloud service. That's really what makes the magic happen. When you think about it, we have automation all around us. Do we not let our VMs float around from host to host (DRS) and from datastore to datastore (SDRS)? Do we not let the packets flow around in our switches and let them figure out what works best for them? Do we not let load balancers determine the best way to handle the load at any given time? Why not let automation into creating a DR plan as well? Now, there would still be some manual work involved, but you will definitely not be starting from scratch and will be using a lot of information that already exists. I think that brings in a lot of value. That's not only cost savings in implementing a DR solution, but also a tremendous amount of cost savings in a solution that will actually work.

Some of the key functionality that the product offers:

  • Automated, Continuous, Service-Oriented DR Testing – Maintains the integrity of your DR plan by being service/application centric, not data centric. It takes a business-centric view of an application and its dependencies and then automates the verification of those applications as many as several times per day. The typical DR plan is tested 1-2 times per year. You can test several hundreds or thousands of times per year with ReliableDR!

  • Application-Aware Testing – Measuring of accurate Recovery Time Actuals (RTOs & RPOs)

  • Certified Recovery Points – automatically storing multiple certified recovery points

  • Compliance Reporting – demonstrates DR objective compliance to auditors

  • Test, Failover, and Failback – Automation of failover and failback processes

  • Flexible Replication Options – Integration with all major storage vendors, multiple software based replication solutions including PHD Virtual, and also includes its own zero-footprint software-based replication capabilities

So then comes the million-dollar question. If it really does all that and it's really simple to set up, then it must be really expensive. So how much does it cost? I am happy to announce that you could even get it for free, obviously with limited functionality. Below are the options one has for the product:

  1. ReliableDR Enterprise Edition
  2. ReliableDR Foundation Edition
  3. FREE

Take a look at this link here for more details in comparing the three options.

My impression of ReliableDR is that it's great for SMBs. However, this does not mean that it's not suitable for large environments; after all, they have an Enterprise edition for a reason. The reason I want to focus on SMBs as the target market for this product is the simplicity of the tool without compromising functionality. Obviously this does not mean that SMBs can't handle complex tools; they certainly can. However, more often than not I find SMB shops full of IT staff who are overworked and understaffed. This makes them jacks of all trades but leaves them with very little time to dedicate and focus on just one area. Such an environment may never be able to get past some of the complex DR solutions that are out there in the market today. For an environment like that, ReliableDR will do wonders. Or, as Steve Jobs once said, it will be like a cold glass of water in hell :D.

I believe large environments will also benefit from the automation of DR this product offers, its application awareness, its scalability and, last but not least, its business-centric view. The ever-changing datacenters require a DR solution that adapts with them and understands their dynamic nature. I think all those things, combined with simplicity and low cost, should make any engineer click this link right now. In times of disaster one needs a tool that works every single time, and ReliableDR offers a very simple way to test failover and failback without making them an expensive exercise. Moreover, most DR solutions tend to get out of date due to the time between exercises. With a tool like ReliableDR these exercises can be arranged more frequently, and perhaps when there is a true disaster there will be calmness in the datacenters versus the headless-chicken dance we often experience.

It makes sense for PHD Virtual to bring their simplistic approach to DR solutions. They have done a great job with the enhancements they have made to their backup solution over the years, and ReliableDR seems to be the next logical step. I was scared that I would find complex settings and installation procedures before getting the product going, but I was pleasantly surprised. Here are some webinars that will help one understand the product in more detail.

Imagine if we had a slider that we adjusted to dictate the amount of money we want to spend on our power bill every month and our homes adjusted to that number and aligned the utilization accordingly. That’s the level of simplicity that’s needed in the DR solutions for today. And I believe ReliableDR is definitely on its way there.

 

Vote for sessions at VMworld 2013 (Tue, 23 Apr 2013)

As you may already know, the VMworld sessions are the result of a rigorous process that starts very early. Before a session is added, it is approved, opened for public voting, and after going through several stages it becomes part of VMworld. This year I am fortunate enough to have been submitted for two sessions at VMworld. The sessions are now open for public voting, and I would appreciate your vote in getting my sessions approved and having the opportunity to speak at VMworld.

You can go to VMworld's website and cast your vote; the public voting opened earlier today. Once there, simply filter the sessions and type the session IDs provided below into the keyword field. That should speed things up. However, it's always a good idea to look for other sessions that may interest you. The important thing is for you to cast your vote. Of course, if that vote is for one of my sessions, that's even better. You will need a VMworld ID in order to vote. You can also enter my name “bilal hashmi” in the keywords field to display both sessions, like the screenshot below. From here onwards, it's a matter of simply clicking the thumbs up. If it's green like mine, that means you voted. Yes, I voted for myself :D

My sessions:

vCenter: The Unsung Hero (Protecting the Core) – Session ID 4873

This session will be about the challenges we face with vCenter in 2013. It will cover topics like the importance of keeping vCenter up at all times. With new dependencies in the stack that rely heavily on vCenter being available at all times, it has become challenging to keep all the pieces of vCenter running at all times. As we all know, we now have more moving parts in vCenter. This session will cover the gotchas, and how you can secure your vCenter in order to keep the dependent services up at all times. I will be co-presenting this with fellow vExpert James Bowling @vSential. Please vote for this session, 4873.

Reports, Mashups and Analytics for your Datacenter – Session ID 5852

The other session is among my favorite topics to talk about, and what better place to talk about it than VMworld: reporting and analytics for your datacenter. We have found ourselves busy with a variety of different techniques to retrieve the all-important reports. Some of us get to use expensive tools that simply don't deliver or are too complicated to use. We also have the brave ones among us who put PowerCLI to work, and of course the good ole Excel spreadsheets are never too far from a reporting discussion. We will be going over a different approach to retrieving critical information across your environment. This is not the one to miss. We will be unveiling some new capabilities of CloudPhysics for reports, mashups and analytics for your datacenter. I plan to co-present this with another fellow vExpert, Anthony Spiteri @anthonyspiteri. Please vote for this session, 5852.

Obviously there are other great sessions that one should probably vote for as well. My suggestions are below. I think these are all great topics and would definitely be great additions to VMworld this year. Good luck and I hope to see you all later this year.

  • 5852 – Reports, Mashups and Analytics for your Datacenter – Bilal Hashmi (Verizon Business), Anthony Spiteri (Anittel)
  • 4873 – vCenter: The Unsung Hero (Protecting the Core) – James Bowling (iland), Bilal Hashmi (Verizon Business)
  • 5778 – Solid State of Affairs: The Benefits and Challenges of SSDs in… – Steve Vu (ESRI), Irfan Ahmad (CloudPhysics)
  • 5818 – How I maximized my vCloud ROI with vSphere Clustering – Parag Mehta (Equinix), Jorge Pazos (City of Melrose)
  • 5854 – Software Defined Data Centers – Big Data Problem or Opportunity? – Ariel Antigua (Universidad APEC), Bob Plankers (Univ of Wisconsin)
  • 5900 – Flight Simulator for the VMware Software Defined Datacenter – Michael Ryom (Statens It, Denmark), Maish Saidel-Keesing (Cisco)
  • 5892 – The Decisive Admin: How to make better choices operating and designing vSphere infrastructure – John Blumenthal (CloudPhysics), Drew Henning (HDR Inc.)
  • 5859 – Storage and CPU Noisy Neighbor Issues: Troubleshooting and Best Practices – Krishna Raja (CloudPhysics), Maish Saidel-Keesing (Cisco)
  • 5823 – You are not alone: community intelligence in a software defined future vSphere infrastructure – Panel: Trevor Pott (eGeek Consulting), Bob Plankers (Univ of Wisconsin), Josh Folland (Moderator)
  • 4872 – Operating and Architecting a vMSC based infrastructure – Duncan Epping (VMware), Lee Dilworth (VMware)
  • 4570 – Ask the Expert VCDX's – Panel: Rick Scherer (EMC), Matt Cowger (EMC), Chris Colotti (VMware), Duncan Epping (VMware), Jason Nash (Varrow)

 

 

  

More training than you can ask for (Wed, 13 Mar 2013)

Most of you must already be aware of what Trainsignal is. They have come up with all kinds of training videos in the past few years, covering not just VMware topics but also other competing and even complementing technologies. This includes Citrix, Microsoft and Cisco, to name a few.

It's one thing to have a long list of training videos, but Trainsignal has really put out some top-quality material. And now they have made all of it very, very affordable: only $49/month, or if you prefer an annual subscription, $468/year for all the training videos they offer. What are those courses? Here is a list.

Of course, you can stream all their videos online, which I have been able to do in the past with no hiccups on my laptop and even my iPad. They also offer offline viewing, where you may download a video and watch it offline, like on a long flight perhaps. The only catch is that for now offline viewing is limited to the Windows and OS X platforms due to the Silverlight dependency. Hopefully that functionality will come to mobile devices soon. Go ahead and sign up for more training than you will ever find the time for.

CloudHook will hook you up – PHD Virtual 6.2 (Thu, 07 Mar 2013)

A few months ago I reviewed the backup tool from PHD Virtual for VMware. Yes, I specifically mentioned VMware because they have one for Citrix also. The company is on the verge of releasing their next major update, 6.2, and I got an opportunity to take a sneak peek.

Last year I was completely new to PHD Virtual and had to figure out all the moving parts, which weren't many to be honest. The post from my last review is here. The one thing I loved about the product was its simplicity, where I really didn't have to sit down and figure out what was needed in order for it to work. I just followed the on-screen instructions and taadaaa!.. it worked. So I have huge expectations this year and won't be going into the details of how to set up and install.

Install:

The overall procedure to do the install is still pretty much the same. You are working with two pieces here.

  1. VBA (appliance)
  2. Console (windows installer)

In order to get the backup tool going, you install the console on a Windows machine and deploy the VBA appliance. From that point some initial configuration is needed and you are all set to go: things like the IP address for the VBA, storage to be used for backup, email notification, retention policies, write space, etc. What is write space? Good question, hold your horses.

You will also create your backup jobs here and kick off a manual backup or schedule a backup job. All of this is pretty basic and I don't plan on going into the details of how to complete these tasks. Once you have the VBA deployed, you will realize you don't need anyone to blog about the how-to steps. And if you really need that, click on the question mark at the top right of the console window and you will have access to all the help you will ever need.

 

The documentation is embedded within the console and it's pretty good and detailed.

So what’s the big deal?

When I reviewed this tool last year there were a few things that really impressed me: its simplicity in deployment, creating backup jobs and recovering from backups. After all, what's the point of a backup tool if you are unable to use it to recover your data?

Just like before, your backups can be used to do both file-level restores (FLR) and, for lack of a better term, bare-metal restores. What's even better is the de-duplication ratio, which they market as around 25:1; my tests last year and this time around confirmed that as well.

So what’s the big deal? Are they simply re-releasing the older version with a new version number? Not really.

Instant Recovery:

One of the features I absolutely adore. This first came out with 6.0. What it does is pretty awesome.

This feature gives you the ability to recover a VM with little to no downtime. Yes, you heard me right: little to no downtime. How? What it does is pretty straightforward. Instead of retrieving from backup and doing a bare-metal-type restore, with Instant Recovery you are basically powering on the VM from a temporary datastore that is created on the backup storage itself and presented to the ESXi host.

As soon as your VM turns on, you can simply storage vMotion it into the desired datastore, its final resting place. But what if you are not licensed for Storage vMotion? No problem, someone thought about that already. In comes “PHD Motion”, which will move your VM from the temporary storage to the production storage, and yes, it will merge all the changes made during the move as well. All changed data is written to a place called “write space”, which is used to make sure PHD Motion moves your VM to its final location with all the changes in place.

But there is a new purpose that the write space serves as well, and it ties into my favorite feature in this product. There is also another type of recovery that I will be going over later in this post, and that new recovery method works great with our next feature.

CloudHook:

For the last few years, everyone has fallen in love with the word cloud, even those of us who don't really know what it means. Now that the definition of cloud is beginning to take some shape and has been demystified for the most part, the next logical thing is to make use of the cloud in our organizations.

Ever had to ship backup tapes to remote locations due to requirements that were set? In my experience dealing with tapes, tracking them, maintaining their rotation, and delivering and receiving these bad boys isn't something anyone looks forward to. But how else can we place our backups in a remote location and meet all the business requirements? Perhaps by storing our all-important data in the cloud?

That's right, all those things that you can do with the VBA today. Imagine if your underlying backup storage was not managed by your SAN/NFS admins but instead resided somewhere in the cloud. Guess what? It's possible now. With PHD Virtual Backup 6.2 you have the option to back up your data into the cloud. Popular storage providers like Amazon's S3, Google Cloud Storage, Rackspace Cloud Files and OpenStack can all serve as your backup target now.

Of course, the next question is how hard is it to set up? Not hard at all. To give you an idea, I basically did a few tests on my local storage first before deciding to use up the limited bandwidth of my home connection. And it worked flawlessly. You are really not doing anything else besides telling the VBA where in the cloud this needs to be backed up.

Once you select your provider, you then supply some specific information like your access key ID, secret key and bucket name. For my tests, I used the S3 option (this is not an endorsement; it just turned out to be the best option for me, please do your own research). One gotcha right away was my bucket name. I was having issues getting my cloud info entered and it kept complaining about not being able to access the storage. It turns out that if you are using S3 as your storage and your bucket name is all caps, you will have this issue. The fix is to not use caps. Obviously this was covered in the release notes that I was supposed to read, but silly me. I wanted to point out this little-known issue because I am sure there are more people out there like me who deploy first and read later, or only when they hit a wall.
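If you want to catch that particular gotcha before the wizard does, a one-line check of the bucket name is enough. The sketch below only covers the issue I hit (uppercase characters); see the full S3 bucket-naming rules for the rest of the restrictions.

    import re

    def bucket_name_ok(name: str) -> bool:
        """True if the name is 3-63 characters of lowercase letters, digits, dots or hyphens."""
        return bool(re.fullmatch(r"[a-z0-9.\-]{3,63}", name))

    print(bucket_name_ok("PHD-Backups"))  # False - the kind of name CloudHook rejected for me
    print(bucket_name_ok("phd-backups"))  # True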

One more thing to keep in mind is the de-duplication ratio with PHD Virtual. With cloud storage we are not only talking about disk space in the cloud but also the bandwidth to push that data around. I think this is where this product really comes in handy: it backs up all that you need without taxing you for resources as one may expect, and with cloud storage that becomes even more important. 25:1 is what they market, and that is around the number I have confirmed in my tests also.

Remember I mentioned another task that's handled by the write space? Here it is: the write space is also used for caching data locally before shipping it out to the cloud. So the two jobs the write space is responsible for are Instant Recovery and cloud storage backup.

So how much space should be allocated to the write space? That's a good question, and it really depends on what's happening in your environment, how much data is being changed every day, and so on. Below is an extract from the help document that discusses how to figure out how big this space should be.

“A guideline for selecting the right size Write Space disk is to calculate at least 50% of your daily changed data, then attach a disk that can accommodate that size. Typically, changed data is about 5% of the total of all virtual disks in your environment that are being backed up. So for example, if you were backing up ten VMs every day that totaled 1 TB, you should attach a virtual disk that is at least 25 GB (Changed data is approximately 5% of 1 TB, which is 50 GB; of which 50% is 25 GB). Write Space can be expanded at any time by adding additional attached virtual disks.”
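That guideline boils down to simple arithmetic. Here is the worked example from the quote as a small sketch of my own, using the same assumptions (roughly 5% daily change, sized to 50% of that):

    def write_space_gb(total_protected_gb: float,
                       change_rate: float = 0.05,
                       coverage: float = 0.50) -> float:
        """Minimum write space disk size for a given amount of protected source data."""
        return total_protected_gb * change_rate * coverage

    print(write_space_gb(1024))  # ~1 TB protected -> roughly 25 GB of write space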

Of course, because this space is shared by two tasks (cloud backup and Instant Recovery), it only makes sense to set thresholds on each task so that one doesn't bully the other. With the slider in the write space configuration, you can set how much of the write space can be used for cloud backup; the rest is used for Instant Recovery. Pretty simple, aye!

Rollback Recovery:

With the backups being sent to the cloud, it only makes sense to have the ability to do restores from the cloud as well. Most other products struggle to do this well for VMs, as they sometimes want you to pull all the backup files locally first before you can recover from them.

With Rollback Recovery, you can restore your VMs to a previous point in time, recovering only the changes from the selected backup over the existing VM. This feature first came out in 6.1 and certainly complements the CloudHook feature in 6.2. This obviously means a few things:

  • Restores will be super fast
  • They will consume less bandwidth, as less data will be sent between your site and the cloud storage provider
  • And last but not least, you can finally meet those RTOs and RPOs that always seemed unrealistic

The only thing to note here is that it's highly recommended to take a snapshot of the VM before doing a Rollback Recovery on it. With this approach, if the communication between you and the cloud breaks, you can at least revert the VM back to its original state. Obviously the brains at PHD Virtual already thought about this, which is why the default behavior is to take a snapshot.

How can you select between the types of restores that are available? Pretty simple: select the appropriate radio button in the screenshot below and move forward.

[Screenshot: restore type selection]

Of course, Rollback Recovery is also available if you don't employ cloud storage for your backups; you can still use it for your on-site backups. I just used the cloud example because it sounds more fruitful in the case where our backup data may be sitting thousands of miles away.

Conclusion:

I am absolutely not a backup expert and this review is definitely not a full review of all the features this product offers. I have basically extended my previous review of the tool from an earlier version and only discussed two new features that I think are very good. As I mentioned before, the best thing for me is the product's simplicity, where a person like me, who does very little with backups and restores in his day job, is able to deploy and test a product like this one.

Part of what's happening in IT today is the convergence of technology. With it, the next thing that will soon follow is the convergence of the roles we have today. With the silos being taken down all around us, it's important that we as IT professionals are capable of exploring tools that do tasks other than what we specialize in. Though I may not be a backup expert, I can fully test and deploy a tool like this within minutes. This helps bring the convergence of roles to life to some degree.

This is a great solution for an environment that is 100% virtualized. But if you are like most organizations that have at least a few physical machines (other than those Oracle boxes, hehehe), you should have some other ideas in mind about how to go about backing up the physical machines. PHD Virtual Backup is only for VMs; you are on your own for the physical boxes or will have to invest in another backup solution for them.

Each VBA only lets you select a single type of storage for backups. What this means is that if you want your data to go to your cloud storage and also have a backup locally available, you will need at least two VBAs. One will back up data to the cloud; the other will write to a local NFS, CIFS or VMDK target attached to the VBA. The good thing is you can manage both using the same console, as both VBAs will be part of the same vCenter. The bad thing is that your VMs will be backed up twice in order to go to two different places. It would have been nice if there was a way to back up only once and send the data to two destinations. From what I have been informed, this issue will be addressed in Q2 with the release of 6.3.

Cloud storage is a great idea for storing your backups. Like everything else, one must do a cost-benefit analysis: figure out the amount of data change, the bandwidth needed to push backups into the cloud, the cost of purchasing space in the cloud and, most importantly, the value of the data that is being backed up. You have a few very good choices in service providers that already work with the product. In any case, I personally believe the tapes have served us well and now let's put them where they belong, in a museum that is. Using cloud storage for backup is definitely looking into the future, and PHD Virtual Backup does an excellent job of simplifying a very complex task. And with rollback recovery, restores also become very fast. After all, why back up at all if we can't restore within a workable window? I urge you to try out the product in your labs. You will be pleasantly surprised, as I have been for some time now. Just be sure to remind yourself that you are not a backup expert; this product has a tendency of making you feel like one. :)

 

Moving some vDatabases http://www.cloud-buddy.com/?p=1864 http://www.cloud-buddy.com/?p=1864#comments Wed, 20 Feb 2013 18:16:18 +0000 http://www.cloud-buddy.com/?p=1864 So at some point you will find yourself doing this, and most of us have probably done at least some of these tasks already: moving the databases of our all-important applications. This is obviously not a complete list; I plan on adding to it as I come across more ‘stuff’. Hopefully this will come in handy when you are asked to produce an SOP for helping decommission your old DB servers. It will be extremely helpful if that request comes in on a Monday morning when you are coming back from a long vacation.

Obviously, if it's not already mentioned in the links below, don't forget to update your ODBC DSN (Data Source Name) settings if your application is using one (for example vCenter, VUM, etc.). The links mostly cover what to do on the application side before/after moving the databases, not so much how to perform the data migration itself. You can rely on your DBAs for that.
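If you want to sanity-check a DSN from a script rather than clicking through odbcad32, here is a minimal sketch, assuming a 64-bit System DSN. The DSN name "vCenter" is only an example, and 32-bit DSNs on 64-bit Windows live under SOFTWARE\Wow6432Node\ODBC\ODBC.INI instead.

    import winreg

    def dsn_server(dsn_name):
        """Return the SQL Server a 64-bit System DSN currently points at."""
        key_path = r"SOFTWARE\ODBC\ODBC.INI" + "\\" + dsn_name
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            server, _ = winreg.QueryValueEx(key, "Server")
            return server

    # After the move, this should print the NEW database server name
    print(dsn_server("vCenter"))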

Move vCenter Database

Move SSO Database

Move vCloud Database

Though I have never done this, my gut tells me that a View Composer DB move requires an ODBC update as well. I am looking for both the Composer and Events DB migration instructions to add to the list above. If anyone already has a link, please share it with me.

As always, be sure to check that the versions covered in the links match the version you are working with in order to produce the desired results.

vCartoon of the Week (01/31/2013) http://www.cloud-buddy.com/?p=1852 http://www.cloud-buddy.com/?p=1852#comments Thu, 31 Jan 2013 22:00:28 +0000 http://www.cloud-buddy.com/?p=1852

When SSO goes bye bye, things get interesting http://www.cloud-buddy.com/?p=1831 http://www.cloud-buddy.com/?p=1831#comments Tue, 29 Jan 2013 22:22:01 +0000 http://www.cloud-buddy.com/?p=1831 So until today I was under the impression that SSO only affects the web-client in 5.1. The way I understood it, the vSphere client still behaves the way it did before, and SSO is not engaged unless the web-client is used to log in. This also brought me to the conclusion that if SSO goes down, one cannot log in via the web-client but the vSphere client can still be used. Wrong!!

A colleague of mine pointed me to this page, which clearly states the following:

How does SSO integrate with the vSphere Client?

SSO does not integrate with the vSphere Client. However, when you log in through the vSphere Client, vCenter Server sends the authentication request to SSO.

Once I read that, I started doubting my thought process and my take on the importance of SSO in 5.1. Apparently all access to vCenter should be down once SSO is down (both via the web-client and the vSphere client).

After doing a lot of testing, this is what I found (vCenter 5.1 build 799731). When SSO is down,

  • access via web-client is down as expected 
  • access via vSphere client is flakey

What does flakey access mean? Well, I got mixed results and was finally able to see a pattern. When the SSO service is down, I was able to log in with an account that had had a successful login while SSO was running. The important thing here was that “use windows session credentials” had to be checked, which meant I had to be logged in to Windows with the account that had successfully logged in while SSO was up. If I didn't check the box and entered the credentials myself, it told me the username and password were incorrect. I know I can fat-finger keys at times, but I tested this over and over to come to this conclusion. It wasn't me. Access was only allowed when the checkbox was checked.

This also meant any new account that was created or granted access couldn't log in using the vSphere client. Remember, we only had luck with accounts that had logged in successfully prior to the SSO service going down, and that too required the checkbox to be checked. If the account was just created or granted access after SSO went down, the screen showed the beautiful message on the right. The same message was received if the account hadn't successfully logged in while SSO was up. Why this message can't simply say that SSO cannot be reached is beyond me. By the way, the web-client will tell you “Failed to communicate with the vCenter Single Sign On server” when SSO is down. So thank you VMware for doing that.

Another thing to keep in mind: when the SSO service is down, your vCenter service continues to run. However, if you attempt to restart your vCenter service, you will find yourself in trouble. I was unable to get the vCenter service to start with SSO offline, which makes SSO even more important. Yes, even with vCenter down your VMs continue to work, but other vCenter-specific features will not function, DRS and Storage DRS for example. And if this vCenter is connected to a vCloud instance, that's another can of worms.
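If you want a quick scripted check of what is actually running during one of these outages, here is a minimal sketch you could run on the vCenter server itself. It simply shells out to sc.exe; the service name "vpxd" is the usual vCenter Server service name, but verify the names (including your SSO service's name) on your own build before relying on this.

    import subprocess

    def service_running(name):
        """Return True if the named Windows service reports RUNNING."""
        out = subprocess.run(["sc", "query", name],
                             capture_output=True, text=True)
        return "RUNNING" in out.stdout

    # Add your SSO service name to this tuple as well
    for svc in ("vpxd",):
        state = "running" if service_running(svc) else "NOT running"
        print(svc, "is", state)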

So the bottom line is, SSO is very, very important. It has two parts: the application and the DB. VMware has done a great job in giving the option to install SSO as a single node, clustered, or even multi-site deployment. High availability on the application side is well thought out. The problem, however, is the DB. VMware does not fully support the SSO DB on a SQL cluster. As a matter of fact, there have been known issues when trying to deploy SSO using a SQL cluster. So the only option with full support is a standalone SQL node. But that also creates a single point of failure. When the DB goes down, you are unable to log in using the web-client, you may be able to log in using the vSphere client, and all the other things we discussed above apply.

So building redundancy is extremely important. VMware's recommended solution is to use vCenter Heartbeat. We all know that can be a pricey solution. However, if full support along with redundancy is important to you, that is the way to go. I hope VMware extends full support to at least allow running the DB on a SQL cluster for all their products, including vCenter (which is still a grey area). That would be the right thing to do. Heartbeat provides added functionality and there will always be a market for that as well. I just hope full support for DBs residing on SQL clusters is not further delayed in the interest of the vCenter Heartbeat product.

In the end I will borrow Tom Petty’s words to tell VMware “Don’t do me like that”…

Kemp ESP – Microsoft Forefront TMG and beyond http://www.cloud-buddy.com/?p=1823 http://www.cloud-buddy.com/?p=1823#comments Mon, 28 Jan 2013 17:47:25 +0000 http://www.cloud-buddy.com/?p=1823 A fellow vExpert pointed me to a product that I think is pretty cool and will probably fit the need of many who have been scratching their heads since the end of sale of Microsoft Forefront Threat Management Gateway. With virtualization and cloud computing, we tend to think that security is no longer needed. In my opinion, security has become even more important and has to be reworked around the way infrastructure is deployed these days, especially in a multi-tenant environment.

Anyways, sticking to the topic here. Kemp Technologies is introducing their Edge Security Pack (ESP), which will enable you to continue deploying solutions like SharePoint, MX, etc. securely. I am not an expert in this area, but I try to keep up with what's happening around me. You can read up more on the solution here. Remember, just because your solution is virtual or in a cloud doesn't mean you no longer need security. It's still required, and it has to be up to par with today's technology. Below is the introductory video of what the Kemp ESP is expected to do.

[Video: Kemp ESP introduction]

Disappearing data after a migration – Access-based Enumeration http://www.cloud-buddy.com/?p=1807 http://www.cloud-buddy.com/?p=1807#comments Thu, 24 Jan 2013 23:26:06 +0000 http://www.cloud-buddy.com/?p=1807 One thing I love about Windows is the fact that there is still a whole lot left for me to learn about it. Not just the big infrastructure services that Windows provides, but the little additions that have come about in recent Windows versions. Now, I know what I am about to talk about is not exactly new; it was first introduced in Windows 2003 SP1. But this was something that fell off my radar, and I didn't really notice it until recently. So I will try and publicize it in case it fell off your radar also.

So let’s say you have a share that users access. The share resides on a file server that runs Windows 2003. The share has 50 folders, and not all users have access to all of them. When a user tries to open a folder to which access is denied, an access denied message appears. Awesome! Now you decide to take advantage of a newer OS, and we will use Win 2008 R2 as an example. So you migrate the data over to the 2008 R2 file server and enable the share. Suddenly pigs start flying, your wife starts agreeing with you and the Patriots start beating the Giants in the Super Bowl. What happened?

So you start hearing things like, “Hey, I can only see 20 folders, before I was able to see 50″. People throw out all kinds of different numbers, and your initial reaction is: what in the world! Of course the folders are all there, and you can confirm that when you log in to the server yourself with your administrative access. So what happened? Why are users not able to view all folders in the share? Well, your Windows server just got a little more secure.

What you will start noticing is users are only able to see folders they have access to. So, if a user only has access to the “Finance” folder in the share, when this user accesses the share the only folder that will appear out of the 50 folders this share has is the “Finance” folder. Pretty nifty aye! So, if one doesn’t have access to a folder, the folder will be invisible. This is happening due to a feature called “Access-based Enumeration”. You can read more about it in this article. And yes, this is enabled by default. 

So the obvious question is, can this be disabled? Well, without getting into why you would want to do this and all, the simple answer is YES. On the 2008 R2 file server, you basically go to the properties of the share using the “Share and Storage Management” console.

Once there, click the “Advanced” button in the “Sharing” tab and there it is. Unchecking the checkbox will disable this feature, and your environment will be vulnerable once again; your users will start seeing folders they don't have access to. My advice: leave it enabled. Why tease them with folders they can't access? :)
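For what it's worth, on newer Windows Server versions (2012 and later) the SmbShare PowerShell module exposes this same setting, so you can also toggle it from a script. Here is a minimal sketch driving it from Python under that assumption; on 2008 R2 you would stick with the GUI steps above.

    import subprocess

    def set_abe(share_name, enabled=True):
        """Toggle Access-based Enumeration on an SMB share via PowerShell.

        Requires the SmbShare module (Windows Server 2012 or later)."""
        mode = "AccessBased" if enabled else "Unrestricted"
        command = ("Set-SmbShare -Name '{0}' -FolderEnumerationMode {1} "
                   "-Force".format(share_name, mode))
        subprocess.run(["powershell", "-NoProfile", "-Command", command],
                       check=True)

    # Hide folders that users have no permission to open
    set_abe("Finance", enabled=True)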

 

Multiple domains, vCenter and SSO http://www.cloud-buddy.com/?p=1769 http://www.cloud-buddy.com/?p=1769#comments Wed, 16 Jan 2013 19:40:24 +0000 http://www.cloud-buddy.com/?p=1769 Not having blogged for some time due to everything else I have been involved in lately, I figured the new year would help me make time for myself. Well, it's been 16 days since the new year, so I guess a belated happy new year to you. I have been playing around with some of the new features in 5.1. Recently somebody asked me about ways to handle access to a vCenter via different domains. A few months ago I would have pointed them to an AD guy and said build some kind of a trust.

A lot has changed in a few months, obviously. SSO has come around and, more importantly, it works (finally :D). I know, low blow, oh well. Using SSO, one can add multiple identity sources (AD, OpenLDAP, etc.), and you can have multiple AD identity sources added to the same SSO. So what does that mean?

When you install SSO on a Windows server, the domain this server is on automatically gets added as an identity source in SSO. You can then pick users and groups from it and assign them privileges. In addition to that, you can also add additional domains to the identity sources and grant entities in those domains access to vCenter as well. And it's so simple that even I can do it. But I figured I will still write up a few steps and summarize the process, as I can very easily forget what I did to make this work.

  1. In order to add users/groups from a domain, the domain needs to be added to the SSO identity sources
  2. The domain the SSO server is on gets added to the identity sources (in my case, the SSO server is on a domain called cloud)
  3. Login to SSO with Admin permission (the default user admin@system-domain works, but you can use a different account with the same privileges as well)
  4. Click on Administration
  5. Click on “Configuration” under “Sign-on and Discovery”
  6. Click the plus sign on the top left 


  7. Add the appropriate info (in my case, I am adding a domain called “sunny.bilalhashmi.com” with an alias “SUNNY”) – I am using the IP for the server URL as I don't have DNS resolution between the SSO server and the SUNNY domain. Ideally you would want to use the DNS name.



  8. Once you have entered the info, test the connection and hit OK; you should then be able to add users/groups from the new domain (the SUNNY domain in my case)
  9. When adding new users/groups, click on the drop-down and you should be able to see the new domain. Select the new domain, add the new permissions and you should be all set (see the scripted sketch a little further below for the same permission assignment). Why do I have “Cloud” along with other things in the list? As I mentioned in Step 2, “Cloud” is the domain my SSO server is on, so it gets added automatically. 



  10. Again, there are no trust requirements between the domains. This is all happening thanks to our new friend SSO.
Obviously you will have to make sure connectivity between the new domain and the SSO server is working. I am unsure about the exact permissions needed in AD for making the connection to SSO. Remember, in step 7 we add the username and password for an account in the domain to make the connection. I used the Administrator account, which had all kinds of access. I believe an account that has permission to query the accounts should be sufficient, but I haven't tested that aspect.
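As promised in step 9, here is a minimal sketch of assigning a vCenter role to a user from the newly added domain through the vSphere API, assuming pyVmomi is installed. The host name, credentials and the SUNNY\jdoe principal are made-up examples, and the certificate check is disabled purely because this is a lab.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only: skip cert validation
    si = SmartConnect(host="vcenter.cloud.local", user="admin@System-Domain",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        authz = content.authorizationManager

        # Grant a user from the SUNNY identity source the built-in
        # Administrator role (-1) on the root folder, propagating down.
        perm = vim.AuthorizationManager.Permission(
            principal="SUNNY\\jdoe", group=False, roleId=-1, propagate=True)
        authz.SetEntityPermissions(entity=content.rootFolder, permission=[perm])
    finally:
        Disconnect(si)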


Why would you have folks from multiple domains coming into the same vCenter? Good question. One use case would be a customer who is supported by another company for their VMware infrastructure. In this case, both the customer and the supporting team can use their own accounts to carry on business. I personally hate having multiple accounts. That's a recipe for becoming the account lockout champion.
vCenter and the certificate saga http://www.cloud-buddy.com/?p=1763 http://www.cloud-buddy.com/?p=1763#comments Tue, 30 Oct 2012 18:21:20 +0000 http://www.cloud-buddy.com/?p=1763 So have you ever been through the process of replacing your vCenter and ESXi host default certificates? It's not something to look forward to, in my opinion. Whether this is really necessary or not is beyond the scope of this post. But please don't replace the certificates just because you can.

So make that assessment for yourself; just because you have the option doesn't mean you have to do it. It is definitely more secure, but then again the most secure network is the one with no users. Understand what it takes to manage the replacement certs, what it means for future hosts that need to be added, how these certs will be renewed, and whether there is a compliance requirement that forces you to use CA-signed rather than self-signed certs. All of these are good questions for assessing whether this is the route you want to take. Also, this is not something new with 5.x; the ability to replace certificates has existed for a long time.
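Before deciding anything, it helps to know what certificate vCenter is presenting today (the default self-signed one, or something already replaced) and when it expires. Here is a minimal sketch for that, assuming the pyOpenSSL package is available; the host name is just an example.

    import ssl
    from OpenSSL import crypto

    def show_cert(host, port=443):
        """Print subject, issuer and expiry of the certificate a host presents."""
        pem = ssl.get_server_certificate((host, port))
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem)
        print("Subject CN:", cert.get_subject().CN)
        print("Issuer CN :", cert.get_issuer().CN)
        print("Expires   :", cert.get_notAfter().decode())  # YYYYMMDDhhmmssZ

    show_cert("vcenter.cloud.local")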

This morning, I came across Duncan's post where he compiled a list of very helpful links for those painful processes. Again, the process may not be as painful to some; it really depends on the size of the environment, which can make this either a few-minute task or a project within a project. While I was going through the KB articles, I remembered an awesome product I saw around the VMworld SF 2012 timeframe. vCert Manager was by far the simplest way I have seen to manage certificates for vSphere. I don't recall sharing this information earlier, so I figured now would be a good time to do so. Below is an introductory video of what the tool is capable of doing. It is expected to be released later this year.

New Cards @CloudPhysics http://www.cloud-buddy.com/?p=1738 http://www.cloud-buddy.com/?p=1738#comments Fri, 05 Oct 2012 16:20:26 +0000 http://www.cloud-buddy.com/?p=1738 Some of you may remember I mentioned a pretty cool product by a company called CloudPhysics about a month or so ago. In August I got an opportunity to meet the smart brains behind it, and did I mention their product also won the “VMworld Best Innovation” award?

As you may already know by now, CloudPhysics includes an HA simulation card, which is my favorite card as of now (but I know many more are coming). What it does is save you time and, ultimately, money. Ever wondered what your available capacity is, and what it would be if you were to change your admission control setting? Well, you can read a few posts I wrote about that topic here or here, for example, or you can simply head over to CloudPhysics and save yourself a lot of time and pain. It's really that simple. And did I mention that it factors in the version of vCenter/ESX(i) you are running, so there should not be any gotchas.

Recently, CloudPhysics released two more cards:

  • VM Reservations and Limits
  • Snapshots Gone Wild

I like these guys; the cards do exactly what their names suggest. The first one will look at your environment and list any VM that has a limit or a reservation. Pretty good aye! What's even better is that it will flag any VM whose limit is set to more than 50%, or to less than 50%, of what's configured. Both of these could pose an interesting situation. Luckily for me, I only have one reservation in my setup, as it pointed out.

The other card is the snapshot police, which lists the VMs along with their snapshot information: when the snapshots were created, the number of child snapshots and the snapshot names, to name a few. If you are getting excited about this, you have probably been burned by the snapshot brigade at least once, or at least know what that means. A lot of times that happens because there has not been a simple way to keep track of snapshots. Well, now there is.
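If you want to see roughly the kind of inventory that card builds, here is a minimal sketch that walks every VM's snapshot tree through the vSphere API, assuming pyVmomi is installed; the vCenter host and credentials are placeholders, and the certificate check is skipped only because this is a lab.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def walk(snapshots, depth=0):
        """Print each snapshot with its creation time, indenting children."""
        for snap in snapshots:
            print("  " * depth + "%s (created %s)" % (snap.name, snap.createTime))
            walk(snap.childSnapshotList, depth + 1)

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.cloud.local", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.snapshot:  # only VMs that actually have snapshots
                print(vm.name)
                walk(vm.snapshot.rootSnapshotList)
        view.DestroyView()
    finally:
        Disconnect(si)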

Head over to CloudPhysics; there are over 900 cards suggested by the community, and pretty soon we will start seeing a bunch of them making all our lives a lot simpler so we can do even bigger and better things.

Discovering vSphere web client 5.1 http://www.cloud-buddy.com/?p=1709 http://www.cloud-buddy.com/?p=1709#comments Wed, 12 Sep 2012 18:35:26 +0000 http://www.cloud-buddy.com/?p=1709 When the web client was first announced last year, I was really excited about what it had to offer. I was personally even more excited because I really thought I would finally have no need to run a Windows VM on my MBP.

Then I had a rude awakening which triggered me to write this post. Basically, even though you could access the web-client via the browser on OS X, there is still an element called the “Client Integration Plug-in” that cannot be installed on OS X. :(. This and a few more things just killed all my motivation to use the web-client. I figured if I was going to run a VM to access a fully functional client, I might as well run the C# client. But now things have changed a bit.

To start off, the client integration part has still not changed; it still does not fully work on OS X. I just tried it on 10.8 with no luck. So what does this really mean? The inability to install it will keep at least two capabilities away from you, that I know of.

  1. Ability to launch the console of the VM
  2. Ability to transfer files to and from datastores
These are the only two that I am aware of; I am not sure if there are any other gotchas yet. But clearly these are not the end of the world. Another thing that I found disappointing was the inability to access VUM from the web-client. Seriously? Why?
So now that I am done with all the trash talking, let's talk about why you should still get on the web-client bandwagon soon.
  1. I have been told, support for OS X is on the roadmap (that should address the client integration plugin issue)
  2. The VUM plug-in is coming soon
  3. All/most new enhancements of 5.1 are visible and accessible via the web-client ONLY
  4. 5.1 will be the last release for the traditional C# client we have used for all these years. Going forward, web-client is the only way.

So there, #4 should be enough to motivate one to move forward. But let's set the draconian approach aside for a little bit. The web-client has had a major facelift and, IMO, has a much better UI. Sure, you will feel lost in the beginning, but you will notice the web-client provides a much better user experience. The UI is well thought out and has a much better workflow.

To start off, one can now assign searchable tags to objects inside vCenter and yes one object can have more than one tag assigned to it. I am sure this will be a very helpful new feature in large environments.

How many times have you started doing something in vCenter only to realize you have to go back to a different object to get some info? But you can't do that without canceling the wizard and what you are doing. In comes Work In Progress, a pretty handy way to address those situations: you can pause what you are doing and come back to it when you are ready.

Another one that I find very useful is the log browser. Basically, it gives you the ability to view the logs for your vCenter and all your hosts in one place. Isn't that nice? A single pane of glass is what we always wanted. And yes, it will let you see all the logs of your hosts, and you can switch between the types of logs using the drop-down list as shown below.

[Screenshot: log browser with the log type drop-down]

These are just some of the new features that the web-client has to offer, among many more. During the beta, I noticed the OS X compatibility issue but was hopeful it would be fixed before GA. That didn't happen, but at least it's on the roadmap. The VUM plug-in is coming soon, along with most other third-party plug-ins. Keep in mind, don't be shocked if future third-party plug-ins are only available for the web-client. It only makes sense for vendors to develop for the client that will stay.

Lastly, if you want to make use of all the enhancements in 5.1, the web-client is a must. I was not too impressed by the original web client last year, but I must say the 5.1 version is not just as good as the traditional C# client, it is better. Moreover, as mentioned above, the traditional client is going away, so now is a good time to get to know your new friend.
