Is Cryptocurrency Mining on Azure N-Series Profitable? And How To Do It Anyway

By | December 15, 2017

The cryptocurrency craze is real, with Bitcoin and other currencies surging in recent months. While some question the long-term viability of these currencies, cryptocurrency mining has been going on for years and is the mechanism by which coins are distributed. Personally, I’ve mined cryptocurrency at home with my own ASIC and GPU equipment, but I’ve always wanted to test the viability of mining in the cloud. There are already several sites that sell hashing power to buyers looking to mine cryptocurrency, and plenty of sellers willing to lease out their compute power to do so.

Bitcoin and several other cryptocurrencies can be mined through varying methods, but ASIC mining is the most efficient when available for a particular coin. GPU mining can be especially profitable on cryptocurrencies that are ASIC-resistant (where the hashing algorithm is designed to be inefficient on ASICs, in an effort to avoid a hardware arms race). CPU mining is barely a blip on the radar, as it is the least efficient method for cryptocurrency mining.

So how does this tie into Azure? About a year ago, Microsoft announced N-series virtual machines, which pack a healthy amount of GPU power. These instances are particularly useful for applications like 3D rendering, artificial intelligence, medical research, and CUDA-intensive computing. GPUs far outperform CPUs in these scenarios because the underlying algorithms parallelize extremely well – and those same characteristics make GPUs good at the hashing used to mine cryptocurrency. Running a miner in Azure isn’t particularly difficult and uses exactly the same method as running it in-house, but I do want to test the best-case scenario to evaluate the profitability of running mining operations in Azure.

First, we need to look at what GPUs are available in Azure. At the time of this posting, there are two GPUs available to use with N-series instances in Azure – the NVIDIA Tesla K80 and the Tesla M60. Since the M60 (NV SKU) is the more recent generation, we will be testing with those. The NV instances include the following SKUs: NV6 (1 GPU, 6 vCPUs), NV12 (2 GPUs, 12 vCPUs), and NV24 (4 GPUs, 24 vCPUs).

Spinning up these instances in Azure is simple enough, but which cryptocurrency will be most profitable? I like to use a site that compares the mining profitability of cryptocurrencies by factoring in several variables like market value, mining difficulty, power consumption, and your specific hardware. This is incredibly useful information when deciding what to mine. For this test, we will evaluate two of the most profitable cryptocurrencies available (at the time of this post) that use two different mining algorithms: MonaCoin, which uses the Lyra2REv2 algorithm, and ZenCash, which uses the Equihash algorithm. Both algorithms perform well on NVIDIA hardware, which suits our Azure instances. To mine MonaCoin, we will use a miner called CCMiner; for ZenCash, we will use a different miner called Zec Miner. Though these cryptocurrencies may provide little value to you, they can always be traded for the coin of your choice on several online exchanges. And yes, USD is included if fiat money is more your style.
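The comparison these profitability sites perform can be approximated by hand: your expected share of the block rewards is your hashrate divided by the total network hashrate. Here is a minimal sketch – every number below is a hypothetical placeholder, since real sites pull live difficulty, block rewards, and exchange rates:

```python
# Back-of-the-envelope mining profitability estimate.
# All inputs are hypothetical placeholders -- real comparison sites
# use live network difficulty, block rewards, and exchange rates.

def expected_coins_per_day(my_hashrate, network_hashrate,
                           block_reward, block_time_sec):
    """Expected coins/day = your share of the network hashrate
    times the number of coins minted per day."""
    blocks_per_day = 86400 / block_time_sec
    share = my_hashrate / network_hashrate
    return share * block_reward * blocks_per_day

def monthly_revenue_usd(coins_per_day, coin_price_usd):
    return coins_per_day * 30 * coin_price_usd

# Hypothetical example: 21 MH/s against a 2 TH/s network,
# a 25-coin block reward, 90-second blocks, and a $14 coin price.
coins = expected_coins_per_day(21e6, 2e12, 25, 90)
print(round(coins, 3), "coins/day")
print(round(monthly_revenue_usd(coins, 14.0), 2), "USD/month")
```

The same arithmetic works for any coin once you plug in its real network stats – which is exactly what the comparison sites automate.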

In Azure, we will deploy a Windows Server VM in the West US 2 region. You can find regions that support N-Series instances through Microsoft’s regional availability site. If you have trouble finding the NV SKU, try switching your storage type from SSD to HDD. Be sure to monitor your usage closely since this SKU gets rather expensive.

I have deployed a simple NV6 Windows Server 2016 instance for this test. This size has one M60 GPU attached, so it should be straightforward to gauge performance. SKUs with multiple GPUs are available, but mining performance scales roughly linearly with GPU count.

Once the VM is deployed, download and install the latest NVIDIA drivers for the Tesla M60 – they do not come installed by default.

From here, you simply run your cryptocurrency miner using the same string you would use normally. This string will vary depending on your mining pool, algorithm, and username, but it generally looks something like this:

  • CCMiner: ccminer -a lyra2v2 -o stratum+tcp:// -u username -p password
  • Zec Miner: miner.exe --server miningpool.com --user username --pass password --port 3618

And now… the results.

For the MonaCoin/Lyra2REv2/CCMiner test, the NV6 SKU was able to mine at a respectable 21 MH/s.

At current market rates, this would result in a payout of around 4.8 MonaCoins/month, or about $66/month – far below the $1004/month for the Azure VM.

Comparatively, a single NVIDIA GTX 1080 Ti GPU would mine MonaCoin at around 63 MH/s.

At current market rates, the 1080 TI GPU would result in a payout of around 14 MonaCoins/month, or about $202/month.

For the ZenCash/Equihash/Zec Miner test, the NV6 SKU was able to mine at 285 Sol/s.

At current market rates, this would result in a payout of around 2.2 Zen/month, or about $71/month – again, far below the $1004/month for the Azure VM.

Comparatively, a single NVIDIA GTX 1080 TI GPU would mine ZenCash at around 730 Sol/s.

At current market rates, the 1080 TI GPU would result in a payout of around 5.6 Zen/month, or about $182.44/month.

Here are all the results compiled into a table:

  Test                    NV6 (Tesla M60)        GTX 1080 Ti
  MonaCoin (Lyra2REv2)    21 MH/s ($66/mo)       63 MH/s ($202/mo)
  ZenCash (Equihash)      285 Sol/s ($71/mo)     730 Sol/s ($182.44/mo)

  Azure NV6 cost: ~$1,004/month
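As a quick sanity check on those numbers, here is the net monthly result of mining on the NV6, using the payouts measured in this test and the instance’s approximate list price:

```python
# Net monthly result of mining on an Azure NV6, using the
# payouts measured in this test.
AZURE_NV6_COST = 1004.0  # USD/month, approximate list price

payouts = {
    "MonaCoin (NV6)": 66.0,   # USD/month
    "ZenCash (NV6)": 71.0,    # USD/month
}

for name, revenue in payouts.items():
    loss = AZURE_NV6_COST - revenue
    pct_recovered = 100 * revenue / AZURE_NV6_COST
    print(f"{name}: -${loss:.0f}/month "
          f"(recovers {pct_recovered:.1f}% of the VM cost)")
```

Either coin recovers only about 7% of the NV6’s monthly cost – a loss of over $900/month per instance.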

As expected, the hash rate of the NVIDIA Tesla M60 running in Azure is far too low to pay for the NV6 instance size – the mining payouts cover only around 7% of the VM’s monthly cost. This could be due partly to mining algorithms being optimized for consumer GPUs, as the M60 and K80 are designed for workstation and compute loads. Unless you have some free compute to burn through in Azure, I wouldn’t recommend using these SKUs for *profitable* cryptocurrency mining. A better investment would be to buy hashing power directly from those willing to sell it, acquire your own mining hardware, or simply buy the cryptocurrency outright in hopes of future gains.

But hey, it was fun to set up!

How to Configure Azure Site Recovery for VMware

By | September 5, 2017

02-21-2018 Update: Microsoft updated the ASR Configuration Server deployment process in February 2018. Instead of deploying a server and running the Configuration Server unified setup, you now download an OVF template from the Recovery Services vault in Azure and import it directly into vCenter. This deploys a new Configuration Server and takes care of several of the prerequisites for you.


Azure Site Recovery is a powerful tool to use as a low-cost disaster recovery site, or even for migration of physical/virtual servers to the cloud. This guide walks through how to configure Azure Site Recovery using an on-prem Configuration Server, which then allows you to move VMware servers to Azure.

First, create a new Resource Group in Azure. In this guide, our resource group will contain everything related to ASR, including the VMs when failover occurs.

There is a lot of network infrastructure planning that goes into ASR. This article does a great job of covering your options. If you want your entire infrastructure to be available in the cloud, with all traffic redirected to the same IP addresses during a failover (via a public routing change), then assign the same IP ranges that you use on-prem to your virtual network in Azure. This effectively gives you a master switch to reroute all traffic to your cloud DR site if needed. Alternatively, you can use different IPs to selectively fail over items as needed. This may be a better option if you use a site-to-site tunnel to Azure from your datacenter, as the IP ranges cannot overlap in that design. That is the configuration used in this example – note the two non-overlapping subnets. In Azure, choose the closest region to your datacenter and create a Virtual Network while scoping out your desired subnet and IP range.

Next, we will create a blob storage account that will store recovery data used by ASR. Choose your desired replication type and location, and continue. A standard, general-purpose storage account works fine for ASR.

Create a Recovery Services Vault in Azure and put it in your ASR resource group. Click Prepare Infrastructure and choose your protection goal. In this case, we will be backing up virtual machines from VMware on-prem to Azure.

Next, we will prepare the source. Since we don’t have System Center Virtual Machine Manager deployed, we will use the Configuration Server. This must be installed on Server 2012 R2 on-prem, and will act as a process server to continuously replicate VMs to ASR. Download the setup files and vault credentials in steps 3 and 4.

Run the Configuration Server setup and choose to install the Configuration Server and process server.

On the next step, enter the path to the vault credentials you downloaded.

Configure a proxy server if your server does not have a direct connection to the Internet. ASR works directly over the Internet or through a proxy and does not require a VPN connection to Azure.

Check that you pass all prerequisites and continue.

Choose a MySQL password and continue the wizard.

Choose yes when asked to protect VMware virtual machines. This may require you to install VMware tools and the vSphere PowerCLI if they are not already present on the Configuration Server.

Choose an install location for the ASR Configuration Server and choose the network interface used for replication traffic on the Configuration Server. This is whatever interface has access to the Internet.

Allow setup to complete. It should take 15 minutes or so.

Save the passphrase generated at the end of the installation. This will be used as an approval mechanism when ASR agents are deployed. You can also regenerate the passphrase later if desired.

When the installation is complete, you will be prompted to reboot the server – do that before continuing. After the reboot, launch the ASR Configuration tool from the Start Menu, or just run “cspsconfigtool.exe” from a command prompt.

Add an account that has administrative permissions to the VMs you want to migrate from VMware. More granular permissions settings are available here. Once complete, close the configuration tool.

Go back to the Azure portal and go to Step 2 (Prepare). Your Configuration Server and VMware account should appear in the Azure portal within about 15 minutes of being added on the local server. Click “+vCenter” and enter the local IP/hostname of the VMware host server in your datacenter. This configures the connection from Azure.

Once the VMware host server is added, you should be able to complete step 2.

On step 3, enter the resource group and subscription that you’d like to use for ASR. The blob storage account and Virtual Network created earlier in this guide will be populated if they’re already in the same resource group. If not, you will need to add them.

On step 4, click Create and Associate.

You will now create a replication policy. The RPO threshold controls the alerting limit – an alert is generated if the replication RPO exceeds it. The Recovery Point Retention controls how long recovery points are stored (your limit is 24 hours if you used premium storage for your blob storage account, or 72 hours if you used standard storage). The default values work fine if you’re not sure what to use, and new policies can be created and applied later.
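The retention window and recovery-point frequency together determine how many restore points you keep per VM. A rough sketch, using hypothetical point intervals (the actual frequency depends on your policy settings):

```python
# Rough estimate of how many recovery points a replication policy
# keeps per VM. The point interval below is a hypothetical example.

def recovery_point_count(retention_hours, point_interval_hours):
    """Number of restore points retained within the window."""
    return int(retention_hours / point_interval_hours)

# Standard storage allows up to 72 hours of retention; with a
# recovery point every hour, that is 72 restore points per VM.
print(recovery_point_count(72, 1))  # standard storage cap
print(recovery_point_count(24, 1))  # premium storage cap
```

More points give you finer-grained rollback but consume more storage in your blob account.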

Your replication policy should now be created and associated, and you can continue to Step 5.

Step 5 is a friendly reminder to plan for your network bandwidth and storage. In short, you should balance the frequency of backup snapshots with the storage and bandwidth requirements of your environment. If snapshot jobs take longer to run, you may want to reduce their frequency so they don’t overlap.
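As a back-of-the-envelope way to size that bandwidth, you can convert your daily data churn into a sustained link speed. The churn figure below is a hypothetical example – measure your real churn before sizing anything:

```python
# Estimate the sustained bandwidth needed to replicate daily churn.
# The churn input is hypothetical -- measure real churn before sizing.

def required_mbps(daily_churn_gb, replication_window_hours=24):
    """Sustained megabits/second needed to push the day's churn
    within the given window."""
    bits = daily_churn_gb * 8 * 1000**3        # GB -> bits (decimal)
    seconds = replication_window_hours * 3600
    return bits / seconds / 1e6                # bits/sec -> Mbps

# Example: 50 GB of churn per day, replicated continuously.
print(round(required_mbps(50), 2), "Mbps")
```

If the required rate exceeds what your link can spare, either widen the replication window or reduce how much data you protect.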

The ASR agent is also referred to as the Mobility Service, which can be deployed in several ways. In this guide, we will install it manually using the GUI on the target server that will be replicated to ASR. For all of the ways to install the ASR agent, check Microsoft’s documentation.

On the Configuration Server, the installation files for the ASR Agent/Mobility Service are located under:


Copy them to the target server and run the binary that matches the OS of the target system.

Launch the setup and choose Install Mobility Service.

Enter the IP address for your Configuration Server and enter the passphrase that was generated at the end of the Configuration Server setup process.

Give the agent an installation path and Proceed to Configuration, which will complete the setup process.

The agent will check into the Configuration Server and be available in the Azure portal within 15 minutes or so.

Go back to your Recovery Services Vault in Azure and choose Replicate in the top menu.

Choose your source settings which use the Configuration Server and the VMware host account.

In Step 2, configure your recovery target. These settings control where a restored VM will reside once it has been failed over to Azure. Use the Resource Group, subnet, and blob storage account from earlier.

In Step 3, check the box next to the VMs that you want to protect with Azure Site Recovery. These will need the ASR agent installed if you have not done so already. In this guide, we will use just one target server.

In Step 4, configure the VM settings for the restored VM in Azure. These are properties like the size of the VM, managed disks, etc. These settings can be modified later if you’re not sure what to use yet.

On step 5, you can choose the Replication Policy that you’d like to use, which we created in a previous step. Enable Multi-VM consistency if you are backing up multiple VMs that share the same workloads.

Now that replication has been configured, you can see every VM being protected under Replication Items under the Recovery Services Vault. Each item will have a synchronization status.

If you click one of the replicated items, you can view/change the properties set during initial configuration, but you can also choose to Test Failover, which will spin up an instance of the protected VM in Azure within minutes. Choosing Failover will perform a real production failover.

Once the failover completes, you should see the replicated VM running in your Resource Group. Using the test failover method will leave the original VM intact and append “-test” to the name of the Azure VM.

From here, you can utilize your Disaster Recovery site in Azure Site Recovery or use ASR as a migration tool to easily move your systems up to Azure.

Automating the Removal of Old Office Versions to Upgrade to 2016

By | February 2, 2017

The end-of-life for the click-to-run version of Office 2013 is quickly approaching (February 28th, 2017). This is a quick reference on how to automate the deployment of Office 2016 to your environment, while also fulfilling the prerequisite of removing any previous versions of Office (including 2013).

Step 1 – Automate the uninstall of previous versions of Office

Installing Office 2016 will not do this on its own, unfortunately. There are several ways to uninstall previous Office versions, but the most reliable method I have found is to use the OffScrub scripts from Microsoft, which can be extracted from the EasyFix uninstallers for Office 2003, 2007, and 2010. For Office 2013 and 2016, a separate script can be run to automate the uninstall using O15CTRRemove.diagcab. All scripts can be combined and run from a single package/program using SCCM. There is a great guide available from Jay Michaud on how to do all of this:

Step 2 – Automate the installation of Office 2016

There are several guides on how to use the Office 2016 Deployment Tool, which allows you to download the Office 365 client installation files and package them up for deployment. This reference guide contains all available commands to customize the XML file which controls how Office 2016 is downloaded, installed, and configured. The final step is to package it up for deployment in SCCM. All of these steps are outlined here:

Step 3 – Deploy both packages simultaneously with Configuration Manager

Of course, you will want to run step 1 and step 2 together to minimize the amount of time that users are without Office on their systems. You can deploy sequential applications in SCCM by using software packages (setting the uninstall program to always run first in the install program properties), by using software applications (setting a software dependency for the uninstall script to run prior to install), or by using a task sequence that contains all of the steps (task sequences can do more than just deploy an OS, after all). As always (and especially with multi-step software deployments), be sure to test deployment with a few pilot systems before running it for all of production.

Microsoft has done a good job of making Office settings/profiles migrate easily to new versions, and the same is true for 2016. Outlook will automatically upgrade any existing mail profiles when run for the first time and should not require any special configuration from the user.

SQL Query to Export All SCCM Maintenance Windows

By | December 13, 2016

Maintenance windows tend to be elusive in Configuration Manager, especially in large environments with multiple admins. A common request that I receive from my customers is to “retrieve all of the maintenance windows.” There isn’t a great report for this or an easy way to do this with PowerShell (yet). The easiest method I’ve found to do this with the most detailed information is through a simple SQL query on the CM database.


SELECT c.Name, c.Comment, SW.IsEnabled, SW.CollectionID, c.MemberCount, SW.Description, SW.StartTime, SW.Duration
FROM v_ServiceWindow SW
JOIN v_Collection c ON c.CollectionID = SW.CollectionID


The results of this query will give you the name, comment, enabled state, collection ID, member count, schedule description, start time, and duration for every maintenance window in your environment, as seen below.
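If you need to hand these results off to someone (say, in a spreadsheet), the rows are easy to dump to CSV. A sketch using hypothetical sample rows in place of a live database connection – the collection IDs and schedules below are made up:

```python
import csv
import io

# Hypothetical rows shaped like the query output:
# (Name, Comment, IsEnabled, CollectionID, MemberCount,
#  Description, StartTime, Duration)
rows = [
    ("All Servers", "Monthly patching", 1, "PS100012", 42,
     "Occurs every 1 month on Saturday", "2016-12-03 22:00", 240),
    ("SQL Servers", "", 1, "PS100045", 6,
     "Occurs every 1 month on Sunday", "2016-12-04 02:00", 180),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Name", "Comment", "IsEnabled", "CollectionID",
                 "MemberCount", "Description", "StartTime", "Duration"])
writer.writerows(rows)
print(buf.getvalue())
```

In practice you would feed the rows in from the CM database (for example via an ODBC connection) rather than hard-coding them.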



In-Console Updates Stuck Installing in ConfigMgr: How To Fix It

By | October 19, 2016

Disclaimer: do not follow these steps unless you know what you’re doing. They should only be used as a last resort. Use with caution.

I’ve had a couple of different SCCM environments get stuck during update installations that have come down through the new Updates and Servicing feature in the current branch builds. Typically this does not happen, and I have yet to determine the root cause. It’s very important to note that updates that come down through the console can take a significant amount of time to install, so be patient. I would recommend waiting several hours for them to complete while checking dmpdownloader.log for real-time status. You should also close and reopen the SCCM console before taking any action – it may just be waiting for you to relaunch it so it can install a newer console version.

This fix will help you if your hotfix updates are stuck in the Installing state, like so:


There were no actions available to resolve the state of these hotfixes from the SCCM console, and restarting the SMS_EXECUTIVE and CONFIGURATION_MANAGER_UPDATE services – or the server itself – did not help in this case, either. It actually looked like the hotfixes had successfully applied weeks ago, yet the state had not been updated. The workaround was to change the status of these hotfixes directly in the CM database, which should be considered a last resort in any scenario. There’s a simple SQL query to do this from SQL Server Management Studio, which originated from this TechNet article from an earlier Technical Preview version:

EXEC spCMUSetUpdatePackageState N'd26be618-1df5-4680-a65f-03cec6abc7ec', 262146, N''

You will need to modify the above query with the GUID of the corresponding update. To find which GUID to use, go to your SCCM installation directory, open the EasySetupPayload folder that serves as the cache for in-console updates, and note the folder name for the hotfix that is stuck. In this case, I ran the query twice – once for each hotfix that was stuck installing.
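Since the folders in EasySetupPayload are named after each update’s GUID, you can list the candidates with a short script. A sketch – the directory and GUIDs here are hypothetical stand-ins, so point it at your real EasySetupPayload path instead:

```python
import os
import re
import tempfile

# Folders in EasySetupPayload are named after each update's GUID.
GUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
    re.IGNORECASE)

def guid_folders(payload_dir):
    """Return subfolder names that look like update GUIDs."""
    return sorted(name for name in os.listdir(payload_dir)
                  if GUID_RE.match(name)
                  and os.path.isdir(os.path.join(payload_dir, name)))

# Demo against a throwaway directory standing in for
# <SCCM install dir>\EasySetupPayload:
demo = tempfile.mkdtemp()
os.mkdir(os.path.join(demo, "d26be618-1df5-4680-a65f-03cec6abc7ec"))
os.mkdir(os.path.join(demo, "NotAGuidFolder"))  # ignored by the filter
print(guid_folders(demo))
```

Cross-reference the folder timestamps with the stuck update’s release date to pick the right GUID before running the query.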


After executing the query and refreshing the SCCM console, the status was cleared immediately and I was able to proceed with installing the latest current branch release.