Category Archives: Azure

Is Cryptocurrency Mining on Azure N-Series Profitable? And How To Do It Anyway

The cryptocurrency craze is real, with Bitcoin and other currencies surging in recent months. While some question the long-term viability of these currencies, cryptocurrency mining has been going on for years and is the mechanism by which coins are distributed. Personally, I’ve mined cryptocurrency at home with my own ASIC and GPU equipment, but I’ve always wanted to test the viability of mining in the cloud. Several marketplaces already broker hashing power: buyers rent compute to mine with, and plenty of sellers are willing to lease theirs out.

Bitcoin and several other cryptocurrencies can be mined through varying methods, but ASIC mining is the most efficient when available for a particular coin. GPU mining can be especially profitable on ASIC-resistant cryptocurrencies (whose algorithms are deliberately designed to make ASICs impractical, in an effort to avoid a hardware arms race). CPU mining is barely a blip on the radar, as it is the least efficient method for cryptocurrency mining.

So how does this tie into Azure? About a year ago, Microsoft announced N-series virtual machines that pack a healthy amount of GPU power. These instances are particularly useful for applications like 3D rendering, artificial intelligence, medical research, and CUDA-intensive computing. GPUs far outperform CPUs in these scenarios because of the massively parallel algorithms they can run – and those same parallel workloads make them quite good at hashing for cryptocurrency mining. Running a miner in Azure isn’t particularly difficult and works exactly the same as running it in-house, but I want to test the best-case scenario to evaluate the profitability of mining in Azure.

First, we need to look at what GPUs are available in Azure. At the time of this posting, there are two GPUs available with N-series instances in Azure – the NVIDIA Tesla K80 and the Tesla M60. Since the M60 (the NV SKU) is the more recent generation, we will test with those. The NV series includes the NV6 (one M60 GPU), NV12 (two GPUs), and NV24 (four GPUs) SKUs.

Spinning up these instances in Azure is simple enough, but which cryptocurrency will be most profitable to mine? I like to use a mining profitability comparison site that factors in variables like market value, mining difficulty, power consumption, and your specific hardware – incredibly useful when deciding what to mine. For this test, we will evaluate two of the most profitable cryptocurrencies available (at the time of this post) that use two different mining algorithms: MonaCoin, which uses the Lyra2RE2 algorithm, and ZenCash, which uses the Equihash algorithm. Both algorithms have well-optimized NVIDIA miners, which suits our Azure instances well. To mine MonaCoin, we will use a miner called CCMiner; for ZenCash, we will use a different miner called Zec Miner. Though these cryptocurrencies may mean little to you, they can always be traded for the coin of your choice on several online exchanges. And yes, USD is an option if fiat money is more your style.

In Azure, we will deploy a Windows Server VM in the West US 2 region. You can find regions that support N-Series instances through Microsoft’s regional availability site. If you have trouble finding the NV SKU, try switching your storage type from SSD to HDD. Be sure to monitor your usage closely since this SKU gets rather expensive.

I have deployed a simple NV6 Windows Server 2016 instance for this test. This size has one M60 GPU attached, so it should be straightforward to gauge performance. SKUs with multiple GPUs are available, and mining performance scales up pretty evenly with GPU count.

Once the VM is deployed, download and install the latest NVIDIA drivers for the Tesla M60 – they do not come installed by default.

From here, you simply run your cryptocurrency miner using the same string you would use normally. This string will vary depending on your mining pool, algorithm, and username, but they generally look something like this:

  • CCMiner: ccminer -a lyra2v2 -o stratum+tcp://<pool-address:port> -u username -p password
  • Zec Miner: miner.exe --server miningpool.com --user username --pass password --port 3618
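If you find yourself juggling several pools and accounts, a small helper can assemble these command lines for you. This is a hypothetical convenience sketch – the pool host, port, wallet, and password values below are placeholders, not real endpoints:

```python
# Hypothetical helper for assembling the miner command lines shown above.
# Pool host, port, wallet, and password values are placeholders, not real endpoints.

def ccminer_cmd(pool_host, pool_port, user, password):
    """Build the CCMiner argument list for the Lyra2REv2 algorithm."""
    return [
        "ccminer", "-a", "lyra2v2",
        "-o", f"stratum+tcp://{pool_host}:{pool_port}",
        "-u", user, "-p", password,
    ]

def zecminer_cmd(pool_host, pool_port, user, password):
    """Build the Zec Miner argument list for Equihash."""
    return [
        "miner.exe",
        "--server", pool_host,
        "--port", str(pool_port),
        "--user", user,
        "--pass", password,
    ]

print(" ".join(ccminer_cmd("pool.example.com", 3777, "wallet.worker", "x")))
print(" ".join(zecminer_cmd("pool.example.com", 3618, "wallet.worker", "x")))
```

Passing the arguments as a list like this also makes it easy to launch the miner with subprocess later, without worrying about shell quoting.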

And now… the results.

For the MonaCoin/Lyra2RE2/CCMiner test, the NV6 SKU was able to mine at a respectable 21 MH/s.

At current market rates, this would result in a payout of around 4.8 MonaCoins/month, or about $66/month – far below the $1004/month for the Azure VM.

Comparatively, a single NVIDIA GTX 1080 Ti GPU would mine MonaCoin at around 63 MH/s.

At current market rates, the 1080 Ti GPU would result in a payout of around 14 MonaCoins/month, or about $202/month.

For the ZenCash/Equihash/Zec Miner test, the NV6 SKU was able to mine at 285 Sol/s.

At current market rates, this would result in a payout of around 2.2 Zen/month, or about $71/month – again, far below the $1004/month for the Azure VM.

Comparatively, a single NVIDIA GTX 1080 Ti GPU would mine ZenCash at around 730 Sol/s.

At current market rates, the 1080 Ti GPU would result in a payout of around 5.6 Zen/month, or about $182/month.

Here are all the results neatly compiled into a table:

  Coin (algorithm)       NV6 / Tesla M60        GTX 1080 Ti
  MonaCoin (Lyra2RE2)    21 MH/s   (~$66/mo)    63 MH/s   (~$202/mo)
  ZenCash (Equihash)     285 Sol/s (~$71/mo)    730 Sol/s (~$182/mo)

  (Azure NV6 cost: ~$1004/month)
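The break-even arithmetic behind these results can be sketched in a few lines of Python. The per-coin prices below are the point-in-time values implied by the post’s payout figures, not live market data:

```python
# Break-even sketch using the point-in-time figures quoted in this post.
# Coin prices are derived from the quoted payouts, not live market data.

AZURE_NV6_COST = 1004.0  # USD per month for the NV6 instance

def mining_net(name, coins_per_month, usd_per_coin):
    """Return the monthly net (mining revenue minus VM cost) and print a summary."""
    revenue = coins_per_month * usd_per_coin
    coverage = revenue / AZURE_NV6_COST  # fraction of the VM bill the mining covers
    net = revenue - AZURE_NV6_COST
    print(f"{name}: ${revenue:.0f}/mo revenue covers {coverage:.1%} "
          f"of the VM cost (net ${net:.0f}/mo)")
    return net

mining_net("MonaCoin on NV6", 4.8, 13.75)   # ~$66/mo revenue
mining_net("ZenCash on NV6", 2.2, 32.27)    # ~$71/mo revenue
```

Either way, mining covers well under a tenth of the NV6’s monthly bill.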

As expected, the hash rate of the NVIDIA Tesla M60 running in Azure is far too low to pay for the NV6 instance size – you would lose over 90% of your investment to the Azure subscription cost. This could be partly because mining algorithms are optimized more for consumer GPUs, while the M60 and K80 are designed for workstation and datacenter loads. Unless you have some free Azure compute to burn, I wouldn’t recommend using these SKUs for *profitable* cryptocurrency mining. A better investment would be to buy hashing power directly from those willing to sell it, acquire your own mining hardware, or simply buy the cryptocurrency in hopes of future gains.

But hey, it was fun to set up!

How to Configure Azure Site Recovery for VMware

**02-21-2018 Update** – Microsoft updated the ASR Configuration Server deployment process in February 2018. Instead of deploying a server and running the Configuration Server unified setup, you now download an OVF template from the Recovery Services vault in Azure and import it directly into vCenter. This deploys a new Configuration Server and takes care of several of the prerequisites for you.


Azure Site Recovery is a powerful tool for low-cost disaster recovery, or even for migrating physical and virtual servers to the cloud. This guide walks through how to configure Azure Site Recovery using an on-prem Configuration Server, which lets you replicate VMware servers to Azure.

First, create a new Resource Group in Azure. In this guide, our resource group will contain everything related to ASR, including the VMs when failover occurs.

A good deal of network infrastructure planning goes into ASR. If you want your entire infrastructure to be available in the cloud, where all traffic is redirected to the same IP addresses during a failover (with a public routing change pointing traffic at Azure), you can assign the same IP ranges you use on-prem to your virtual network in Azure. This effectively gives you a master switch to reroute all traffic to your cloud DR site if needed. Alternatively, you can use different IP ranges to selectively fail over items as needed. This may be the better option if you use a site-to-site tunnel from your datacenter to Azure, since the address spaces cannot overlap in that design – and that is the configuration used in this example, with separate, non-overlapping subnets on each side. In Azure, choose the region closest to your datacenter and create a Virtual Network, scoping out your desired subnets and IP ranges.
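For the site-to-site option, it’s worth verifying up front that your on-prem and Azure address spaces don’t overlap. Python’s standard ipaddress module makes this a one-liner; the ranges below are hypothetical examples, not values from this lab:

```python
import ipaddress

# Hypothetical address spaces -- substitute your real on-prem and Azure ranges.
on_prem = ipaddress.ip_network("10.0.0.0/16")
azure_vnet = ipaddress.ip_network("10.1.0.0/16")

def can_use_site_to_site(a, b):
    """A site-to-site tunnel requires the two address spaces not to overlap."""
    return not a.overlaps(b)

print(can_use_site_to_site(on_prem, azure_vnet))   # non-overlapping: tunnel is fine
print(can_use_site_to_site(on_prem, ipaddress.ip_network("10.0.5.0/24")))  # overlaps
```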

Next, we will create a storage account that will hold the recovery data used by ASR. Choose your desired replication and location, then continue. A general-purpose storage account with standard storage works fine for ASR.

Create a Recovery Services Vault in Azure and put it in your ASR resource group. Click Prepare Infrastructure and choose your protection goal. In this case, we will be backing up virtual machines from VMware on-prem to Azure.

Next, we will prepare the source. Since we don’t have System Center Virtual Machine Manager deployed, we will use a Configuration Server. This must be installed on Server 2012 R2 on-prem, and it will act as the process server that continuously replicates VMs to ASR. Download the setup file and vault credentials in steps 3 and 4.

Run the Configuration Server setup and choose to install the Configuration Server and process server.

Enter the path to the vault credentials you downloaded on the next step.

Configure a proxy server if your server does not have a direct connection to the Internet. ASR works directly over the Internet or through a proxy and does not require a VPN connection to Azure.

Check that you pass all prerequisites and continue.

Choose a MySQL password and continue the wizard.

Choose yes when asked to protect VMware virtual machines. This may require you to install VMware tools and the vSphere PowerCLI if they are not already present on the Configuration Server.

Choose an install location for the ASR Configuration Server and choose the network interface used for replication traffic on the Configuration Server. This is whatever interface has access to the Internet.

Allow setup to complete. It should take 15 minutes or so.

Save the passphrase generated at the end of the installation. This will be used as an approval mechanism when ASR agents are deployed. You can also regenerate the passphrase later if desired.

When the installation is complete, you will be prompted to reboot the server – do that before continuing. After the reboot, launch the ASR Configuration tool from the Start Menu or by running “cspsconfigtool.exe” from a command prompt.

Add an account that has administrative permissions to the VMs you want to migrate from VMware. More granular permissions settings are available here. Once complete, close the configuration tool.

Go back to the Azure portal and return to Step 2 (Prepare). Your Configuration Server and VMware account should appear in the portal within 15 minutes or so of being added on the local server. Click “+vCenter” and enter the local IP/hostname of the vCenter server (or standalone ESXi host) in your datacenter. This configures the connection from Azure.

Once the VMware host server is added, you should be able to complete step 2.

On step 3, enter the resource group and subscription that you’d like to use for ASR. The blob storage account and Virtual Network created earlier in this guide will be populated if they’re already in the same resource group. If not, you will need to add them.

On step 4, click Create and Associate.

You will now create a replication policy. The RPO threshold defines the recovery point objective you want to meet – ASR generates an alert if replication falls behind this value. The Recovery Point Retention setting controls how long recovery points are stored (the limit is 24 hours if you used premium storage for your storage account, or 72 hours with standard storage). The default values work fine if you’re not sure what to use, and new policies can be created and applied later.

Your replication policy should now be created and associated, and you can continue to Step 5.

Step 5 is a friendly reminder to plan for your network bandwidth and storage. In short, you should balance the frequency of recovery points against the storage and bandwidth your environment can sustain. If replication jobs take longer than their interval to complete, reduce their frequency so they don’t overlap.
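As a back-of-the-envelope sketch of that balancing act, you can estimate the sustained bandwidth needed to replicate a day’s worth of changed data. All figures here are hypothetical – measure your real daily churn before sizing a link:

```python
# Back-of-the-envelope replication bandwidth estimate (all inputs hypothetical).

def required_mbps(daily_churn_gb, replication_window_hours):
    """Sustained Mbit/s needed to replicate a day's changed data in the window."""
    bits = daily_churn_gb * 8 * 1000**3          # decimal GB -> bits
    seconds = replication_window_hours * 3600
    return bits / seconds / 1e6                  # bits/s -> Mbit/s

# e.g. 50 GB of daily change, replicated continuously across 24 hours:
print(f"{required_mbps(50, 24):.2f} Mbit/s sustained")
```

If the result exceeds what your uplink can spare, either lengthen the window or reduce the set of protected VMs.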

The ASR agent is also referred to as the Mobility Service, and it can be deployed several ways – including an automated push installation from the Configuration Server. In this guide, we will install it manually using the GUI on the target server that we want to protect.

On the Configuration Server, the installation files for the ASR Agent/Mobility Service are located under:


Copy them to the target server and run the binary that matches the OS of the target system.

Launch the setup and choose Install Mobility Service.

Enter the IP address for your Configuration Server and enter the passphrase that was generated at the end of the Configuration Server setup process.

Give the agent an installation path and Proceed to Configuration, which will complete the setup process.

The agent will check into the Configuration Server and be available in the Azure portal within 15 minutes or so.

Go back to your Recovery Services Vault in Azure and choose Replicate in the top menu.

Choose your source settings which use the Configuration Server and the VMware host account.

In Step 2, configure your recovery target. These settings control where a restored VM will reside once it has been failed over to Azure. Use the Resource Group, subnet, and blob storage account from earlier.

In Step 3, check the box next to each VM that you want to protect with Azure Site Recovery. These VMs will need the ASR agent installed if you have not done so already. In this guide, we will use just one target server.

In Step 4, configure the VM settings for the restored VM in Azure. These are properties like the size of the VM, managed disks, etc. These settings can be modified later if you’re not sure what to use yet.

On step 5, you can choose the Replication Policy that you’d like to use, which we created in a previous step. Enable Multi-VM consistency if you are backing up multiple VMs that share the same workloads.

Now that replication has been configured, you can see every protected VM under Replicated Items in the Recovery Services Vault. Each item will have a synchronization status.

If you click one of the replicated items, you can view/change the properties set during initial configuration, but you can also choose to Test Failover, which will spin up an instance of the protected VM in Azure within minutes. Choosing Failover will perform a real production failover.

Once the failover completes, you should see the replicated VM running in your Resource Group. Using the test failover method will leave the original VM intact and append “-test” to the name of the Azure VM.

From here, you can utilize your Disaster Recovery site in Azure Site Recovery or use ASR as a migration tool to easily move your systems up to Azure.

Azure AD Connect 1.1 Released with Several New Features

Azure AD Connect 1.1 (formerly DirSync) is now generally available for download. If you’ve been using Azure AD Connect, you’ll want to pay attention to the new features in 1.1.

Automatic Upgrade

This is the last time you need to manually upgrade Azure AD Connect: a new automatic upgrade feature will periodically apply upgrades for you.

More Frequent Synchronizations

In the past, the default sync interval was 3 hours. Now, you can schedule a sync to run as often as every 30 minutes, if desired.

Support for MFA

This is a big one. Previously, accounts that used multi-factor authentication could not be used with Azure AD Connect. This was a huge security risk because the account used by Azure AD Connect had to be a global administrator on your tenant. In the new release, MFA is now supported to better secure your service accounts.

More Flexibility

You can now choose which OUs to synchronize with your tenant during the installation process. Previously, you had to install Azure AD Connect and then filter the OUs later in the Synchronization Service Manager.

You can also modify the user sign-in method after installation now. Previously, you had to choose this during the install of Azure AD Connect and didn’t have the option to modify it later without reinstalling.

Azure AD Account Support Coming to Windows 10

One of several big Windows 10 announcements from Ignite last week was the integration of Azure AD and Windows 10. If you’re not familiar with Azure AD Sync Services (formerly DirSync), it allows the synchronization of user accounts (and passwords) between your local Active Directory environment and Azure, where those credentials can automatically be used to provision Office 365 email accounts, for instance. This type of federation allows Office 365 users to sign in to their organizational email accounts directly.

Starting with Windows 10, users will be able to log into Windows using that same organizational account. This is similar to how you can use a Microsoft account (formerly Windows Live ID) today, but it brings additional management capabilities along with it. The initial sign-in process can be done right out of the box on new devices without any prior device deployment/management or domain membership – essentially, users can provision their own devices. MDM policies can be applied to systems, SSO is enabled for cloud applications (Lync/Skype for Business, Outlook, etc.), and OS state roaming synchronizes settings (WiFi, wallpaper, OS settings) automatically between devices. Basically, this is the ultimate BYOD scenario.

During the Windows setup experience, users will be able to choose “This device belongs to my organization” to sign into Azure AD.


Next, they can use their Azure AD credentials to sign in, just like Office 365.


If a matching tenant is found for the domain, users can proceed to sign in through ADFS or Azure AD.


MDM enrollment happens next.


Now the user would be signed into their organizational account on their Windows 10 system. Pretty impressive considering that they provisioned it all by themselves, right? Having a single, federated account for all services and devices has some pretty big potential down the road.

For more information on this new feature in Windows 10, see this blog post on TechNet.

How To Upload and Run a Windows 10 Enterprise VM in Azure

Running a workstation OS in the cloud may not be the most practical solution at this time, but it may prove useful in some test lab scenarios. While Azure does support plenty of server OS options that you can choose from a gallery and have up and running within minutes, Windows 7 and 8 images are currently only available to MSDN Subscribers. Azure does provide the capability to upload your own VHD to run on their platform, though. In this guide, we will create a Hyper-V VM with Windows 10 Enterprise Preview, prepare the VHD and upload it to Windows Azure, and connect to the Windows VM for use in the cloud. We will be using Windows 10 in this guide, but the steps are the same for Enterprise versions of Windows 7 and Windows 8.

First, you should know that there are a few catches. This is not supported by Microsoft at all, so don’t expect any help from them if you need to open a case. No worries – we’re all IT enthusiasts here. Also, whichever version of Windows you choose for use in Azure needs to be the Enterprise edition. If you try to upload Windows 7 Ultimate or Windows 8.1 Pro, for example, Azure will not prepare the VM properly and it will get stuck in a provisioning state. This happens because Azure uses a specific unattend.xml file on the backend for VM deployment, which you don’t have access to.

Second, Azure is not cheap. I have a test lab environment that I use in Azure because of how easy it is to access from anywhere without needing any hardware, and I like to get my hands dirty with Azure. That being said, I always leave my VMs turned off unless I am using them. Be sure to shut down your VMs from the Azure portal, and not from within the OS itself, or you will still be charged for uptime! There are PowerShell scripts available to properly spin down Azure systems – I’d recommend looking into these for any production environments. This is server-grade hardware, so it can cost you hundreds to thousands of dollars each month if you’re not careful. I’d recommend using the Azure Pricing Calculator to get a good idea of what it will cost, and setting up the free trial of Azure that comes with $200 to spend on its services – more than enough to get you through this guide. This guide also assumes some level of Azure proficiency; if you are not familiar with Azure, I highly recommend using Microsoft’s Test Lab guide to get up and running.
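To see why deallocating matters, here’s an illustrative comparison. The hourly rate is a made-up placeholder, not a real Azure price – use the Pricing Calculator for actual figures. The key point is that compute is billed while the VM is allocated, which is why shutting down from the portal (not from within the OS) is what actually stops the meter:

```python
# Illustrative cost comparison; the $0.50/hour rate is a made-up placeholder --
# use the Azure Pricing Calculator for real numbers.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate_usd, hours_running):
    """Azure bills compute only for the hours the VM is allocated."""
    return hourly_rate_usd * hours_running

always_on = monthly_cost(0.50, HOURS_PER_MONTH)  # left running all month
lab_use = monthly_cost(0.50, 40)                 # ~10 hrs/week, deallocated otherwise

print(f"Always on: ${always_on:.2f}/mo vs deallocated when idle: ${lab_use:.2f}/mo")
```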

To begin the process, we will create a VM in Hyper-V. I am using the Hyper-V feature in Windows 8.1, but any version of Hyper-V will do. Before you create the VM, save yourself some work later and create a VHD to run the VM on, since Azure requires that format. You can convert a VHDX drive to a VHD drive later, but it’s much less work to do it correctly up front. By default, Hyper-V in Windows 8 and Server 2012 uses the VHDX format. Azure also requires a fixed-size VHD, not the dynamically expanding type that most VMs now default to. Lastly, the VHD size needs to be a whole number of gigabytes – no decimals.
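The three requirements above (VHD format, fixed size, whole-GB size) can be expressed as a quick sanity check. This is a simple attribute check rather than a real VHD file parser – you supply the disk’s properties yourself:

```python
# Quick sanity check of the three upload requirements described above.
# This checks attributes you supply; it is not a real VHD file parser.

GB = 1024**3

def azure_ready(extension, is_fixed_size, virtual_size_bytes):
    """Return a list of problems; an empty list means the disk looks uploadable."""
    problems = []
    if extension.lower() != ".vhd":
        problems.append("must be VHD format, not VHDX")
    if not is_fixed_size:
        problems.append("must be fixed size, not dynamically expanding")
    if virtual_size_bytes % GB != 0:
        problems.append("size must be a whole number of GB")
    return problems

print(azure_ready(".vhd", True, 20 * GB))            # [] -> ready to upload
print(azure_ready(".vhdx", False, int(20.5 * GB)))   # three problems
```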

In Hyper-V Manager, create a new Virtual Hard Disk and choose the VHD format.


Choose a fixed size VHD.


Use an integer for the VHD size. I’d recommend using 20GB as a minimum. Azure will automatically expand the size to whatever performance tier you use for the VM later.


Now, create a new VM from Hyper-V Manager.


If presented with the option, choose to create a Generation 1 VM. The other specifications don’t matter, but give it an appropriate amount of memory and a connection to the Internet if you are patching the system.


When asked, specify to install the OS on the VHD we already created.


Now, mount the ISO for your Enterprise edition of Windows and proceed to install the OS.


When the VM boots up and you are greeted with the Windows Setup screen, hit SHIFT + F10 to bring up a command prompt. This step is optional, but it will prevent Windows from creating a recovery partition, which is useless in Azure.

The commands used to properly format the disk for installation are:

  • Diskpart
  • Select Disk 0
  • Clean
  • Create Partition Primary
  • Format Quick FS=NTFS
  • Active
  • Assign


Now, proceed with the install as normal. Accept the EULA, choose the custom installation option, and install on the available partition.


Proceed with the installation, allowing the system to reboot as necessary.


When the installation gets to the Settings screen, do not proceed as usual. Instead, press CTRL + SHIFT + F3. This forces Windows Setup into Audit Mode, which is made specifically for capturing images. It does a couple of things: it skips the personalization steps built into Windows 7 and later, and it logs you in automatically as the built-in administrator without creating a user account.


Windows will reboot and automatically log in as the administrator. You will see a window pop up for System Preparation every time Windows boots- this is OK. Close it for now- we will run it on our own later.


Now, patch Windows and install any applications that you want to be present on your VM. If using Windows 10 Preview, you may want to check if any newer builds are available in PC Settings.


Now is also a good time to add an additional local administrator account to the VM. We will use this account to RDP into the system once it is running in Azure. This isn’t always necessary, but in my experience it has been required for the Windows 10 Preview. Be sure to remember the username, set a secure password (it will be directly accessible in the cloud), and add the user to the Administrators group on the local system.


When Windows is patched and in your desired state for capture, launch Run, and type in “sysprep”. This will open the directory containing the Sysprep executable.


Run Sysprep.exe.


In the Sysprep options, choose “Enter System Out-of-Box Experience (OOBE)”, check the Generalize box, and choose to shut down the system when finished.


Sysprep will now run and prepare the image to be used for deployment. When finished, the VM will be turned off.


From here, you can delete the VM in Hyper-V – we only need the VHD file that the OS was installed on. Copy the VHD somewhere convenient, but do not launch it in a VM, or you will need to repeat the Sysprep steps.

We will now prepare Azure to receive your new VHD file. If you don’t have a small test lab configured yet in Azure, you may want to follow this guide from Microsoft to get up and running first. At the very least, you will need a Cloud Service, Storage Account, and Virtual Network configured.

To create a new storage service in Azure (if you don’t have one already), choose New on the bottom menu, choose Data Services, then Storage. Choose Quick Create, and name your URL something unique. Choose a region that is close to you, also.


Now, choose your new Storage from the left. In this example, mine is called “hefflab”.


Create a new Container in your Storage to hold the VHD we will be uploading. Choose Containers, then choose Add.


Name your container something like “vhd”, since that’s what we’ll be using it for (note that container names must be lowercase).


Your new Container should now be visible under Storage.


We will use PowerShell for the remaining operations to upload the VHD. If you haven’t installed the Azure PowerShell module yet, do that first, then launch the Microsoft Azure PowerShell module. Use Add-AzureAccount to connect this PowerShell session to your Azure account.


Once authenticated to your Azure account, use the following command to download your Publish Settings file:

Get-AzurePublishSettingsFile
This will open a browser and download a file, named after your Azure subscription, that ends in “.publishsettings”. Save it somewhere handy. Now, import this file into your current PowerShell session using the following command:

Import-AzurePublishSettingsFile <Path To PublishSettings File>


Next, we need to find the URI path to your container, which is where we will send the VHD file. You can see this under the Container view in the Azure portal. It will look something like this: https://<storageaccount>.blob.core.windows.net/<container>


Now, use the following command to upload the VHD into Azure. The Destination is your container URL plus the name of your new VHD file. The LocalFilePath is where the VHD currently resides on your local system. Be patient – this can take some serious time.

Add-AzureVhd -Destination <Container URL/VHD> -LocalFilePath <path to local VHD>

Grab some coffee. This PowerShell command will skip empty space in the VHD, but you’ll definitely want to run it somewhere with good upstream bandwidth.
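To estimate how much coffee you’ll need, here’s a rough upload-time calculation. The link speed and VHD occupancy figures are hypothetical – plug in your own:

```python
# Rough upload-time estimate for the Add-AzureVhd step (inputs hypothetical).

def upload_hours(vhd_gb, upstream_mbps, occupied_fraction=1.0):
    """Hours to upload; Add-AzureVhd skips empty blocks, so scale by how
    much of the fixed VHD is actually occupied."""
    bits = vhd_gb * occupied_fraction * 8 * 1000**3
    return bits / (upstream_mbps * 1e6) / 3600

# A 20 GB fixed VHD, ~60% occupied, on a 10 Mbit/s upstream link:
print(f"{upload_hours(20, 10, 0.6):.1f} hours")   # -> 2.7 hours
```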


When complete, your PowerShell module should look like this:


Now we will create a Virtual Machine Image from the uploaded VHD in the Azure portal. Go to the Images tab of the Virtual Machines section. Choose to create an image.


Fill in the name for your VM image and browse your container to find the uploaded VHD. Check the box indicating that you ran Sysprep on the machine, also.


The image will now be listed under OS images.


Using the New button on the lower menu, choose to create a new Virtual Machine from Gallery.


The Windows 10 x64 Enterprise image is now available to deploy a VM from.


From here, you can customize the VM as desired, but be sure to read up on Azure pricing! The virtual machine will launch and take a bit of time to provision, but you should be able to RDP into it by using the Connect button on the bottom menu.


When prompted for credentials, you can try either the account you used when deploying the VM in Azure, or the local administrator account that we created right before capturing the VHD. You may need to enter the computer name as the domain, such as hostname\localadmin.


And from here, it should work just like a standard RDP session.


I hope this post was helpful! Leave a comment if I can answer anything for you.