This is a new series that I’m putting together to cover some of the best practices I’ve picked up through years of consulting in the Azure IaaS space, from other professionals in the community, and from relationships with Microsoft and events like Ignite. Of course, your mileage may vary, and one solution may not necessarily be the best solution for you.
I will share as many resources as possible in these posts, but cloud evolves quickly so be warned that links will expire, new features will be released (or deprecated), and best practices change over time (just look at how out-of-date the SCCM posts have gotten on this blog).
So let’s kick things off. Networking in Azure can be as simple or as complex as you’d like to make it – Azure is simply your canvas. And you don’t need to be Claude Monet to deploy resilient networks in Azure. Most networking principles from the datacenter generally apply to IaaS networking in the cloud, but there are additional considerations. I’ll outline some here, but also check out the virtual network planning guide from Microsoft.
I always recommend white-boarding your network before deploying it in Azure – this will allow you to clearly envision your environment. Determine what your requirements are and what you need to achieve in your cloud deployment. Will you be connecting your cloud network to other datacenters or clouds? What kind of traffic will be flowing through your network and what are your public endpoints? Which resources are production-level and what needs redundancy? What’s your budget and what type of subscription will you be building your resources in? Backup? Disaster recovery? You don’t have to determine all of these initially, but how you start building your resources can determine your network and infrastructure landscape moving forward.
Azure IaaS introduces more responsibilities for the traditional administrator, but it’s not all bad. You can automate, orchestrate, and standardize so much more in the cloud versus hosted datacenters. A couple of examples – rather than pushing updates with SCCM, you can use Update Management in the Azure portal to deploy updates to your servers with the same level of precision. You can use Log Analytics and Azure Monitor to monitor services running in Azure rather than installing SCOM servers. Azure Backup runs as a service and can back up entire VMs and SQL instances without any additional server infrastructure. By automating infrastructure operations, you allow yourself to work smarter and spend more time building.
Here are a few topics that I always give consideration when building out networks in Azure.
Which region (or regions) is the best fit for your Azure infrastructure? Several factors can determine where you actually want your data to live in the cloud (even though you will never physically touch it). All Azure regions are connected through Microsoft’s dark fiber backbone, but your entry point to the cloud is still an important decision. If your infrastructure requires the highest level of availability, consider a region that has Availability Zones (current list here), which come with higher SLAs. Availability Zones allow you to spread your resources across multiple Azure datacenters within the same region – protecting you in the scenario that a single datacenter becomes unavailable (planned and unplanned maintenance does happen from time to time).
All regions allow redundancy within single datacenters using Availability Sets. Not every product is available in every Azure region – use this tool to determine which region is best for you before discovering a feature is unavailable. Pricing varies considerably between regions, especially for compute (VM) resources. The Azure Calculator is an excellent tool for estimating costs, and there are third-party options that provide a better visual comparison among regions. It might just come down to which region has the lowest latency to your physical location – you can use this Azure Speed test tool and this Azure Latency test tool to determine your closest region.
Consider the Basics
Your Virtual Network IP address space can be as small or as large as you’d like, and you can change it after the fact. It’s usually safer and more manageable to start small. Just make sure your address spaces don’t overlap on networks that will be connected. Subnets can be deployed within the address space that you choose. You don’t have to fill your entire address space with subnets and you can change them down the road.
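To illustrate the overlap rule above, here’s a quick sketch using Python’s standard `ipaddress` module to check whether two address spaces collide before you connect them – the CIDR ranges are just example values:

```python
import ipaddress

def spaces_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two address spaces overlap (and so can't be safely connected)."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# An on-premises space vs. a proposed Azure VNet space (illustrative ranges):
print(spaces_overlap("10.0.0.0/16", "10.0.128.0/17"))  # True - would collide
print(spaces_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False - safe to connect
```

Running a check like this against every network you plan to peer or tunnel to is much cheaper than re-addressing a Virtual Network later.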
Subnets are great for organizing your Virtual Network, but it’s helpful to know that they are open for communication by default. For proper network segmentation, use Network Security Groups to segment traffic between subnets and network interfaces.
Some network appliances require their own subnet to function. This is due to the scalability available in several appliances like WAFs and Gateways – they can scale up and use additional instances when traffic increases.
You may also want to consider if you’ll be using any Virtual Appliances from the Azure Marketplace with your Virtual Network. It’s much easier to deploy Virtual Appliances at the same time as your Virtual Network. Otherwise, you will need to get comfortable with using ARM templates to attach Virtual Appliances to existing resources in Azure. Most firewall vendors such as Cisco and Palo Alto have ARM templates published in public GitHub repositories to deploy their Virtual Appliances in Azure.
Generally speaking, resource names in Azure are set in stone – most resources cannot be renamed after they are created. If using a consistent naming scheme is important to you, document it as early as possible. Be sure to identify naming limitations, since different types of resources allow different characters. Storage accounts can only be lowercase and alphanumeric, while VM names can contain dashes, for example. A full list of naming requirements can be found here.
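As a rough illustration, you could sanity-check names against these limits before deploying. The validators below reflect just the two examples above (3–24 lowercase alphanumeric characters for storage accounts; up to 15 letters, digits, and hyphens for Windows VM computer names) – a simplified sketch, not the full Azure rule set:

```python
import re

def valid_storage_account(name: str) -> bool:
    # Storage accounts: 3-24 characters, lowercase letters and digits only.
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

def valid_windows_vm(name: str) -> bool:
    # Windows VM computer names: 1-15 characters; letters, digits, hyphens.
    # (Azure has further rules, e.g. names can't be all digits - not modeled here.)
    return re.fullmatch(r"[A-Za-z0-9-]{1,15}", name) is not None

print(valid_storage_account("prodlogs001"))    # True
print(valid_storage_account("Prod-Logs-001"))  # False - uppercase and hyphens
print(valid_windows_vm("web-prod-01"))         # True
```

Wiring checks like these into your deployment scripts catches naming mistakes before Azure rejects the deployment.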
You might actually find tagging to be much more beneficial than ironing out a naming convention for all of your resources. Creating classes and properties for tagging tends to make life easier. It can greatly decrease the time it takes to identify a resource, find the owner, categorize it in the proper cost center come monthly invoice time, etc.
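For example, a simple pre-deployment check can flag resources missing required tags. The tag keys below (`environment`, `owner`, `costCenter`) are a hypothetical schema – substitute your own classes and properties:

```python
# Hypothetical required-tag schema enforced before a resource is deployed.
REQUIRED_TAGS = {"environment", "owner", "costCenter"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys a resource is missing."""
    return REQUIRED_TAGS - resource_tags.keys()

vm_tags = {"environment": "prod", "owner": "netops@contoso.com"}
print(missing_tags(vm_tags))  # {'costCenter'}
```

Come invoice time, a complete `costCenter` tag on every resource makes the monthly chargeback report a filter operation rather than a scavenger hunt.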
How you connect to your virtual network, and how it integrates with your existing topology, is one of the more important factors in building out your network in Azure.
While it’s not advisable in most situations, you can connect directly to Azure Virtual Networks through Public IPs. These connections should be explicitly allowed using NSGs and closely monitored. An easy example would be allowing RDP access directly to a VM through a public IP. Should you do it? Probably not. Can you do it? Absolutely. A good NSG practice would be to utilize the default rule to block all inbound traffic to your virtual network, then explicitly allow inbound RDP traffic from the IP address you’ll be connecting from.
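Conceptually, NSG rules are evaluated in priority order (lowest number first) and the first match wins. The sketch below models the practice just described – allow RDP from one known IP, deny everything else with a custom catch-all – in Python. The rule shape is illustrative, not the actual Azure API:

```python
# Illustrative NSG inbound rules: lowest priority number is evaluated first.
rules = [
    {"priority": 100,  "source": "203.0.113.10", "port": 3389, "action": "Allow"},
    {"priority": 4096, "source": "*",            "port": "*",  "action": "Deny"},
]

def evaluate(source_ip: str, port: int) -> str:
    """Return the action of the first rule that matches, mimicking NSG evaluation."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        source_matches = rule["source"] in ("*", source_ip)
        port_matches = rule["port"] in ("*", port)
        if source_matches and port_matches:
            return rule["action"]
    return "Deny"  # nothing matched

print(evaluate("203.0.113.10", 3389))  # Allow - your admin workstation
print(evaluate("198.51.100.7", 3389))  # Deny  - everyone else
```

The source IP `203.0.113.10` is a placeholder from the documentation range; in practice it would be the public IP you connect from.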
Virtual Network Gateways enable several different types of connections to your Virtual Network in Azure. Site-to-site connections (IPsec/IKE VPN tunnels) are a simple routable connection between two networks using a public IP on each side. This has commonly been used by organizations with multiple sites since well before the cloud existed. These connections traverse the public Internet and are prone to disruption. You can easily connect Azure virtual networks to each other using site-to-site connections and Virtual Network Gateways, but it’s actually much easier to do so with peering.
Peering is very easy to implement and allows you to route directly between Virtual Networks in the same region or in different regions. It’s simpler, cheaper, and lower-latency than using site-to-site tunnels. Peering also utilizes Azure’s dark fiber backbone, so it does not traverse the public Internet or use Public IPs. You can quickly and easily connect several virtual networks in Azure by using peering and then configuring route tables for cross-network connections.
Point-to-site connections allow single endpoints (such as a Windows or Mac client) to connect to a Virtual Network in Azure. They still use the Virtual Network Gateway for connectivity and pull internal IP addresses from a pool set inside the configuration. Connections are secured with either certificates or a RADIUS server. The most common example for point-to-site connections is simply VPN-ing from home to your Azure Virtual Network – the same way you would traditionally connect to your datacenter network from home.
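As a small aside on how that address pool behaves: the gateway hands each connecting client the next usable address from the configured range, much like iterating a network’s hosts. A toy sketch with `ipaddress` (the pool range is illustrative):

```python
import ipaddress

# Hand out client addresses from a configured point-to-site pool.
# 172.16.201.0/24 is an illustrative pool, chosen not to overlap the VNet.
pool = ipaddress.ip_network("172.16.201.0/24").hosts()

first_client = next(pool)
second_client = next(pool)
print(first_client, second_client)  # 172.16.201.1 172.16.201.2
```

The practical takeaway is the same as for VNet address spaces: pick a pool that doesn’t overlap your Virtual Network or any on-premises range the clients will reach.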
ExpressRoute is the ultimate connection between on-premises and your cloud Virtual Network, but also the most costly. It also has a few more prerequisites and takes a bit longer to configure due to the physical line that gets installed. ExpressRoute uses a dedicated circuit between your datacenter, local service provider, and the Azure fiber backbone. You can use this connection for more than just your cloud traffic – you can also route Office 365 and Internet traffic through ExpressRoute if desired. One of the best advantages of ExpressRoute is that it’s a private connection that does not rely on the public Internet – it is a fiber connection directly to Azure and therefore carries a higher SLA. ExpressRoute uses a Virtual Network Gateway to register the connection from your service provider once established.
Azure Virtual WAN is a newer offering that is definitely worth mentioning. It follows the SD-WAN industry trend and extends it into the Azure public cloud. It’s a simple way to connect multiple office locations and Azure virtual networks under a centrally managed configuration.
There are several marketplace virtual appliances that function the same way a traditional perimeter device would in your datacenter. They are software-based and run on compute instances in Azure, allowing you to easily configure VPN connections the same way you would on-premises. These offerings typically cost more than a native Virtual Network Gateway, but they can actually be easier to configure and allow additional options such as more tunnel types (L2TP, PPTP, IPsec, SSL, etc.) or a higher number of tunnel connections. They also allow for vendor consistency across your hybrid environment.
Network security in Azure is a big enough topic to get its own post (coming soon). I’d recommend giving the official article on Azure Network Security a good read.
As a start, minimize and secure your endpoints. You wouldn’t deploy a server in a datacenter without a firewall – don’t do it in the cloud. Public IPs are exactly that – public to the Internet and exposed to all sorts of malicious Internet traffic. A lot of people are surprised by this very simple demo: deploy a server in the public cloud and allow RDP access through a public IP. You will immediately begin to see failed login attempts in the security audit log from outsiders. If you leave ports open to the public Internet, you are relying solely on Windows Server security and the strength of your credentials for protection. Most security analysts wouldn’t take that risk. We’ll go through some known vulnerabilities and how to mitigate them in a future post.
If you do need to use a public-facing endpoint in Azure, inspect that traffic using a WAF or firewall. You can use the native Azure offerings or third-party marketplace appliances that run on Azure compute instances. The Azure-native WAF is very effective and uses OWASP rule sets, but it is limited to protecting HTTP/HTTPS traffic on ports 80/443. Azure Firewall went public in 2018 and is relatively new, but it provides features comparable to a fully featured firewall, such as NAT rules and access policies, at a cost higher than the WAF. Third-party marketplace firewalls are typically the most feature-rich and come with the highest cost. Several of them have BYOL (bring your own license) pricing available for long-term use. In short, there are just too many vulnerabilities on HTTP and HTTPS (let alone thousands of other ports/protocols) to leave them exposed directly to the Internet without packet inspection.
Network Security Groups are essential to lock down your network in Azure. While they don’t provide packet inspection, they provide a strict ACL on network traffic going to, from, and within your Virtual Networks in Azure. NSGs permit traffic over specific ports to an IP, VM, or Virtual Network. It’s important to know that, by default, Azure Virtual Networks are open internally and don’t restrict traffic between subnets. They do block all inbound traffic from external sources by default, however. NSGs enable proper network segmentation (see below for an example of this kind of topology). They are associated with network interfaces used by virtual machines by default, but are actually easier to manage when associated with an entire subnet.
A network monitoring tool is essential for network security. I’d recommend deploying and getting familiar with Azure Network Watcher. It has the ability to run connectivity checks, log traffic, and provide you insight on what kind of traffic is actually traversing your Virtual Networks.
Azure is a platform built on identity. It’s important to secure all accounts that are used to access Azure – either directly to an endpoint or through Azure Resource Manager. This means securing admin accounts using features like multi-factor authentication and least-privilege access when possible at the subscription and resource group level in Azure.
Finally, it’s definitely worth checking Azure Security Center regularly for actionable recommendations to secure your cloud environment. This includes recommendations for securing your servers, Azure AD management, network, endpoints, and many other items that are easily overlooked.