How to Configure Azure Site Recovery for VMware

September 5, 2017

Azure Site Recovery (ASR) is a powerful tool to use as a low-cost disaster recovery site or even for migrating physical/virtual servers to the cloud. This guide walks through how to configure Azure Site Recovery using an on-prem Configuration Server, which then allows you to move VMware servers to Azure.

First, create a new Resource Group in Azure. In this guide, our resource group will contain everything related to ASR, including the VMs when failover occurs.

A fair amount of network infrastructure planning goes into ASR, and this article does a great job of covering your options. If you want your entire infrastructure to be available in the cloud, where failed-over servers keep the same IP addresses and a public routing change redirects traffic to the DR site, then assign the same IP ranges you use on-prem to your virtual network in Azure. This effectively gives you a master switch to reroute all traffic to your cloud DR site if needed. Alternatively, you can use different IPs and selectively fail over items as needed. This is usually the better option if you connect your datacenter to Azure with a site-to-site tunnel, because the IP ranges cannot overlap in that design. That is the configuration used in this example; notice the 10.255.242.0/24 and 20.255.252.0/24 subnets. In Azure, choose the region closest to your datacenter (try out azurespeed.com) and create a Virtual Network, scoping out your desired subnet and IP range.
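Since non-overlapping address space is the key constraint in the site-to-site design, a quick sanity check with Python's ipaddress module can confirm that your on-prem and Azure ranges don't collide. The subnet values below are the ones used in this example; substitute your own:

```python
import ipaddress

def subnets_overlap(a, b):
    """Return True if two CIDR ranges share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Subnets used in this example (on-prem vs. Azure virtual network)
print(subnets_overlap("10.255.242.0/24", "20.255.252.0/24"))  # False: safe for a site-to-site tunnel
print(subnets_overlap("10.0.0.0/16", "10.0.4.0/24"))          # True: would break the tunnel design
```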

Next, we will create a blob storage account that will store recovery data used by ASR. Choose your desired replication option and location, and continue. Both standard and general-purpose storage accounts work fine for ASR.

Create a Recovery Services Vault in Azure and put it in your ASR resource group. Click Prepare Infrastructure and choose your protection goal. In this case, we will be backing up virtual machines from VMware on-prem to Azure.

Next, we will prepare the source. Since we don't have System Center Virtual Machine Manager deployed, we will use the Configuration Server. This must be installed on Server 2012 R2 on-prem, and it will act as a processor that continuously replicates VMs to ASR. Download the setup file and vault credentials in steps 3 and 4.

Run the Configuration Server setup and choose to install the Configuration Server and process server.

On the next step, enter the path to the vault credentials you downloaded.

Configure a proxy server if your server does not have a direct connection to the Internet. ASR works directly over the Internet or through a proxy and does not require a VPN connection to Azure.

Check that you pass all prerequisites and continue.

Choose a MySQL password and continue the wizard.

Choose yes when asked to protect VMware virtual machines. This may require you to install VMware tools and the vSphere PowerCLI if they are not already present on the Configuration Server.

Choose an install location for the ASR Configuration Server, then select the network interface that will carry replication traffic. This is whichever interface has access to the Internet.

Allow setup to complete. It should take 15 minutes or so.

Save the passphrase generated at the end of the installation. This will be used as an approval mechanism when ASR agents are deployed. You can also regenerate the passphrase later if desired.

When the installation is complete, you will be prompted to reboot the server – do that before continuing. After the reboot, launch the ASR Configuration tool from the Start Menu or by running “cspsconfigtool.exe” from a command prompt.

Add an account that has administrative permissions to the VMs you want to migrate from VMware. More granular permissions settings are available here. Once complete, close the configuration tool.

Go back to the Azure portal and go to Step 2 (Prepare). Your Configuration Server and VMware account should appear in the Azure portal within 15 minutes of being added on the local server. Click “+vCenter” and enter the local IP/hostname of the VMware host server in your datacenter. This configures the connection from Azure.

Once the VMware host server is added, you should be able to complete step 2.

On step 3, enter the resource group and subscription that you’d like to use for ASR. The blob storage account and Virtual Network created earlier in this guide will be populated if they’re already in the same resource group. If not, you will need to add them.

On step 4, click Create and Associate.

You will now create a replication policy. The RPO threshold controls how far replication can fall behind before an alert is raised. The Recovery Point Retention controls how long recovery points are stored (your limit is 24 hours if you used premium storage for your storage account, or 72 hours if you used standard storage). The default values work fine if you’re not sure what to use, and new policies can be created and applied later.
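As a rough sanity check on retention settings, you can estimate how many recovery points a policy keeps on hand. The function and the example values below are illustrative arithmetic, not ASR's exact bookkeeping:

```python
def retained_points(retention_hours, frequency_hours):
    """Approximate number of recovery points kept, given how often
    points are created and how long they are retained."""
    return retention_hours // frequency_hours

# Illustrative values: 72 h retention (standard storage) with a point every 4 h
print(retained_points(72, 4))   # 18 points available to fail over to
# Premium storage caps retention at 24 h
print(retained_points(24, 4))   # 6 points
```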

Your replication policy should now be created and associated, and you can continue to Step 5.

Step 5 is a friendly reminder to plan for your network bandwidth and storage. In short, you should balance the frequency of backup snapshots with the storage and bandwidth requirements of your environment. If snapshot jobs take longer to run, you may want to reduce their frequency so they don’t overlap.
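A back-of-the-envelope calculation helps with that planning. This sketch is my own arithmetic rather than an official ASR sizing formula; it converts an estimated daily data-change volume into the sustained bandwidth needed to keep replication current:

```python
def required_mbps(daily_change_gb, replication_window_hours=24.0):
    """Sustained bandwidth (Mbit/s) needed to replicate a given
    daily data-change volume within the chosen window."""
    bits = daily_change_gb * 8 * 1000**3          # decimal GB -> bits
    seconds = replication_window_hours * 3600
    return bits / seconds / 1e6                   # bits/s -> Mbit/s

print(round(required_mbps(50), 1))     # ~50 GB/day of churn spread over 24 h
print(round(required_mbps(50, 8), 1))  # same churn compressed into an 8 h window
```

If the second number exceeds your uplink, either lengthen the window or reduce what you protect.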

The ASR agent is also referred to as the Mobility Service, and it can be deployed several ways. In this guide, we will install it manually using the GUI on the target server that will be protected by ASR. For all of the ways to install the ASR agent, check https://docs.microsoft.com/en-us/azure/site-recovery/vmware-walkthrough-install-mobility

On the Configuration Server, the installation files for the ASR Agent/Mobility Service are located under:

C:\ProgramData\ASR\home\svsystems\pushinstallsvc\repository

Copy them to the target server and run the binary that matches the OS of the target system.

Launch the setup and choose Install Mobility Service.

Enter the IP address for your Configuration Server and enter the passphrase that was generated at the end of the Configuration Server setup process.

Give the agent an installation path and click Proceed to Configuration, which will complete the setup process.

The agent will check into the Configuration Server and be available in the Azure portal within 15 minutes or so.

Go back to your Recovery Services Vault in Azure and choose Replicate in the top menu.

Choose your source settings which use the Configuration Server and the VMware host account.

In Step 2, configure your recovery target. These settings control where a restored VM will reside once it has been failed over to Azure. Use the Resource Group, subnet, and blob storage account from earlier.

In Step 3, check the box next to the VMs that you want to protect with Azure Site Recovery. These will need the ASR agent installed if you have not already done so. In this guide, we will use just one target server.

In Step 4, configure the VM settings for the restored VM in Azure. These are properties like the size of the VM, managed disks, etc. These settings can be modified later if you’re not sure what to use yet.

On step 5, choose the Replication Policy that you’d like to use, which we created in a previous step. Enable Multi-VM consistency if you are replicating multiple VMs that share the same workload.

Now that replication has been configured, you can see every protected VM under Replicated Items in the Recovery Services Vault. Each item will have a synchronization status.

If you click one of the replicated items, you can view/change the properties set during initial configuration, but you can also choose to Test Failover, which will spin up an instance of the protected VM in Azure within minutes. Choosing Failover will perform a real production failover.

Once the failover completes, you should see the replicated VM running in your Resource Group. Using the test failover method will leave the original VM intact and append “-test” to the name of the Azure VM.

From here, you can utilize your Disaster Recovery site in Azure Site Recovery or use ASR as a migration tool to easily move your systems up to Azure.

Automating the Removal of Old Office Versions to Upgrade to 2016

February 2, 2017

The end of life for the Click-to-Run version of Office 2013 is quickly approaching (February 28, 2017). This is a quick reference on how to automate the deployment of Office 2016 in your environment while also fulfilling the prerequisite of removing any previous versions of Office (including 2013).

Step 1 – Automate the uninstall of previous versions of Office

Installing Office 2016 will not do this on its own, unfortunately. There are several ways to uninstall previous Office versions, but the most reliable in my experience is to use Microsoft’s OffScrub scripts, which can be extracted from the EasyFix uninstallers for Office 2003, 2007, and 2010. For Office 2013 and 2016, a separate script can be run to automate the uninstall using O15CTRRemove.diagcab. All scripts can be combined and run from a single package/program using SCCM. There is a great guide available from Jay Michaud on how to do all of this: https://www.deploymentmadscientist.com/2016/02/08/deploying-microsoft-office-2016-removing-old-versions/

Step 2 – Automate the installation of Office 2016

There are several guides on how to use the Office 2016 Deployment Tool, which allows you to download the Office 365 client installation files and package them for deployment. This reference guide contains all available options for customizing the XML file that controls how Office 2016 is downloaded, installed, and configured. The final step is to package it up for deployment in SCCM. All of these steps are outlined here: https://www.systemcenterdudes.com/sccm-2012-office-2016-deployment/.
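To illustrate what the Deployment Tool consumes, here is a sketch that generates a minimal configuration.xml. The element and attribute names follow the ODT schema, but treat the product/language values as placeholders and consult the reference guide above for the full option list:

```python
import xml.etree.ElementTree as ET

def build_odt_config(product_id="O365ProPlusRetail",
                     language="en-us",
                     edition="32"):
    """Generate a minimal Office Deployment Tool configuration.xml string."""
    root = ET.Element("Configuration")
    add = ET.SubElement(root, "Add", OfficeClientEdition=edition)
    product = ET.SubElement(add, "Product", ID=product_id)
    ET.SubElement(product, "Language", ID=language)
    # Suppress the install UI for silent deployment
    ET.SubElement(root, "Display", Level="None", AcceptEULA="TRUE")
    return ET.tostring(root, encoding="unicode")

print(build_odt_config())
```

Save the output as configuration.xml and pass it to setup.exe with the /download and /configure switches.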

Step 3 – Deploy both packages simultaneously with Configuration Manager

Of course, you will want to run steps 1 and 2 together to minimize the amount of time that users are without Office on their systems. You can deploy sequential applications in SCCM by using packages (setting the uninstall program to always run first in the install program’s properties), by using applications (setting a dependency so the uninstall script runs prior to install), or by using a task sequence that contains all of the steps (task sequences can do more than just deploy an OS, after all). As always, and especially with multi-step software deployments, be sure to test with a few pilot systems before deploying to all of production.

Microsoft has done a good job of making Office settings/profiles migrate easily to new versions, and the same is true for 2016. Outlook will automatically upgrade any existing mail profiles when run for the first time and should not require any special configuration from the user.

SQL Query to Export All SCCM Maintenance Windows

December 13, 2016

Maintenance windows tend to be elusive in Configuration Manager, especially in large environments with multiple admins. A common request I receive from customers is to “retrieve all maintenance windows.” There isn’t a great built-in report for this or an easy way to do it with PowerShell (yet). The easiest method I’ve found, with the most detailed information, is a simple SQL query against the CM database.


SELECT C.Name, C.Comment, SW.IsEnabled, SW.CollectionID, C.MemberCount, SW.Description, SW.StartTime, SW.Duration
FROM v_ServiceWindow SW
JOIN v_Collection C ON C.CollectionID = SW.CollectionID
ORDER BY C.Name

The results of this query will give you the Name, Comments, Date/Time/Frequency, and Duration for every maintenance window in your environment.


In-Console Updates Stuck Installing in ConfigMgr: How To Fix It

October 19, 2016

Disclosure: do not follow these steps if you do not know what you’re doing. They should only be used as a last resort. Use with caution.

I’ve had a couple of different SCCM environments get stuck during update installations that came down through the new Updates and Servicing feature in the current branch builds. Typically this does not happen, but I have yet to determine the root cause. It’s very important to note that updates delivered through the console can take a significant amount of time to install, so be patient. I recommend waiting several hours for them to complete while checking dmpdownloader.log for real-time status. You should also close and reopen the SCCM console before taking any action; the update may just be waiting for you to relaunch the console so it can install a newer console version.

This fix will help you if your hotfix updates are stuck in the Installing state.


There were no actions available to resolve the state of these hotfixes from the SCCM console, and restarting the SMS_EXECUTIVE and CONFIGURATION_MANAGER_UPDATE services (and the server itself) did not help in this case, either. It actually looked like the hotfixes had been applied successfully weeks ago, yet the state had not been updated. The workaround was to change the status of these hotfixes directly in the CM database, which should be considered a last resort in any scenario. There is a simple SQL query to do this from SQL Server Management Studio, which originated from this TechNet article covering an earlier Technical Preview version:

EXEC spCMUSetUpdatePackageState N'd26be618-1df5-4680-a65f-03cec6abc7ec', 262146, N''

You will need to modify the above query with the metadata string from the corresponding update. To find which string to use, go to your SCCM installation directory, open the EasySetupPayload folder that serves as the cache for in-console updates, and find the folder name for the hotfix that is stuck. In this case, I ran the query twice, once for each hotfix that was stuck installing.
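If several updates are stuck, a small helper can generate one EXEC statement per update GUID (the folder names found under EasySetupPayload). This is just string assembly with a hypothetical function name; review the output before running anything against the CM database:

```python
def build_reset_queries(update_guids, state=262146):
    """Build one spCMUSetUpdatePackageState call per stuck update GUID."""
    return ["EXEC spCMUSetUpdatePackageState N'{0}', {1}, N''".format(g, state)
            for g in update_guids]

# Folder names copied from the EasySetupPayload cache
for query in build_reset_queries(["d26be618-1df5-4680-a65f-03cec6abc7ec"]):
    print(query)
```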


After executing the query and refreshing the SCCM console, the status was cleared immediately and I was able to proceed with installing the latest current branch release.


WSUS Synchronization Failures in SCCM with HTTP Status 503

September 19, 2016

I ran into a new error today during a WSUS synchronization for SCCM Software Updates. Synchronizations had been running fine for a while, but now they would fail after running for an extended amount of time. The error was easy to find in the wsyncmgr.log file in the Configuration Manager logs: the synchronization was failing with HTTP status 503.


Usually when synchronization fails, it does so immediately, because WSUS is not configured properly, is missing a hotfix, or is not mapped to the proper ports in IIS.

After a bit of research, I found a very useful article explaining that the WSUS application pool in IIS may be running out of memory during synchronization. Two symptoms identify this issue: the 503 error in wsyncmgr.log, and the WSUS application pool being stopped in IIS after the failure.


To fix the issue, set the Private Memory Limit on the WSUS application pool to 4000000 or 8000000 KB as recommended in the article, and restart the application pool. You can then trigger a manual synchronization and monitor the log again.
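Note that the IIS Private Memory Limit is specified in KB, so those values are roughly 4 GB and 8 GB. A one-liner makes the conversion explicit:

```python
def kb_to_gb(kb):
    """Convert an IIS private memory limit (specified in KB) to GB."""
    return kb / 1024 ** 2

print(round(kb_to_gb(4000000), 2))  # 3.81
print(round(kb_to_gb(8000000), 2))  # 7.63
```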


So far, in testing this change in other environments, it appears that it can also significantly improve performance and cut down on sync times.