
    Azure Practical: Peer-to-Peer Transitive Routing

Posted by Tracy Wallace on 17 October 2019

    This blog is Part One of a two-part series on facilitating transitive routing over Azure virtual network peering connections. Here in Part One, I will discuss the need for transitive routing, some advantages and disadvantages of the architecture, and alternatives to transitive routing.

    What is Peer-to-Peer Transitive Routing?

    In Azure, peer-to-peer transitive routing describes network traffic between two virtual networks that is routed through an intermediate virtual network. For example, assume there are three virtual networks - A, B, and C. A is peered to B, B is peered to C, but A and C are not connected. For network traffic to get from A to C, it would have to travel through network B. This action is transitive routing.
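To make the non-transitive behavior concrete, here is a tiny Python sketch (a toy model of the A/B/C example above, not Azure code) that treats each peering as a direct, one-hop link:

```python
# Toy model of the A/B/C example: a peering connection is a direct link only.
# Azure does not forward traffic across a chain of peerings on its own.
peerings = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")}

def directly_routable(src: str, dst: str) -> bool:
    """A packet can flow only if the two networks share a peering connection."""
    return (src, dst) in peerings

print(directly_routable("A", "B"))  # True  - A and B are peered
print(directly_routable("A", "C"))  # False - A and C would need B to transit the traffic
```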

    A common topology in Azure is the "hub and spoke" networking architecture. This topology contains a central "hub" virtual network connected to an on-premises network, via either a VPN gateway or an ExpressRoute circuit. Additional "spoke" virtual networks that support distinct workloads, are connected to the "hub" virtual network via peering connections. This topology supports hybrid networking while providing network segmentation and delegated administration.

[Figure: A typical "hub and spoke" network topology]

As you can see in the diagram above, the "hub" network is connected to the on-premises network via a VPN gateway or ExpressRoute gateway, while all "spoke" virtual networks have peering relationships with the "hub" virtual network. Built-in routing and gateway transit (or BGP, if using ExpressRoute) automatically provide the following routing flows (a configuration sketch follows the list):

    • All "spoke" virtual networks and on-premises
    • "Hub" virtual network and all other networks (on-premises and "spoke")

    This topology does not directly support communication between the "spoke" virtual networks. In many architectures it is necessary for some or even all of the "spoke" networks to communicate. For example, a company may host a customer API server in one "spoke" virtual network and a central database server in another. If the API needs to communicate with the database, how would they facilitate it? One option is to simply add a peering relationship between the API and data virtual networks:

[Figure: "Hub and spoke" topology with a direct peering added between two "spoke" virtual networks]

This is fine for a small number of "spokes", but the number of peerings grows quadratically, at n*(n-1) one-way peerings for n fully meshed virtual networks. For example, if there are 100 networks and they all need to communicate, there will be 9,900 peering connections.
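The growth is easy to quantify. Azure counts each direction of a peering as its own resource, so a full mesh of n virtual networks needs n*(n-1) peering resources (n*(n-1)/2 network pairs):

```python
def full_mesh_peerings(n: int) -> int:
    """One-way peering resources needed for n fully meshed virtual networks."""
    return n * (n - 1)

for n in (3, 10, 100):
    print(n, full_mesh_peerings(n))
# 3 6
# 10 90
# 100 9900
```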

    An alternate approach is to add custom routing to the topology. This eliminates the need to establish peering relationships between the "spoke" networks. The connection topology is the same as the standard "hub and spoke", with a network virtual appliance (NVA) acting as a router in the "hub" virtual network, and routing rules added to the "spoke" virtual networks via route tables.

[Figure: "Hub and spoke" topology with a router NVA in the "hub" virtual network and route tables on the "spokes"]

    This topology routes all traffic between "spokes" through the "hub" network and the router NVA. The "hub" provides transitive routing for all inter-spoke communication.
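The route-table side of this design can be sketched with the same azure-mgmt-network SDK. The example below defines a user-defined route in a hypothetical spoke1 route table that sends traffic destined for spoke2's address space to the NVA's private IP, then associates the table with spoke1's workload subnet. All names, prefixes, and the NVA address are illustrative assumptions, and the subnet update is simplified (a real update would resubmit the subnet's full existing configuration).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable, Subnet

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

NVA_IP = "10.0.1.4"            # assumed private IP of the router NVA in the hub
SPOKE2_PREFIX = "10.2.0.0/16"  # assumed address space of spoke2

# Route table for spoke1: send spoke2-bound traffic to the NVA in the hub.
route_table = client.route_tables.begin_create_or_update(
    "spoke1-rg", "spoke1-routes",
    RouteTable(
        location="eastus",
        routes=[
            Route(
                name="to-spoke2-via-nva",
                address_prefix=SPOKE2_PREFIX,
                next_hop_type="VirtualAppliance",
                next_hop_ip_address=NVA_IP,
            ),
        ],
    ),
).result()

# Associate the route table with spoke1's workload subnet
# (simplified: only the address prefix and route table are shown here).
client.subnets.begin_create_or_update(
    "spoke1-rg", "spoke1-vnet", "workload",
    Subnet(
        address_prefix="10.1.0.0/24",
        route_table=RouteTable(id=route_table.id),
    ),
).result()
```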

    Advantages

    There are several advantages to this topology. Among them are:

    • Traffic Control - Traffic between "spokes" is controlled via custom routing rules associated with each "spoke" virtual network and the router appliance added to the "hub" virtual network.
    • Administrative Simplicity - Depending on the routing rules, you can easily implement either a "whitelist" or a "blacklist" approach to routing. To configure a "whitelist" architecture, only add routing rules for the specific communication you want to allow. For example, say you have three "spoke" virtual networks - spoke1, spoke2, and spoke3. You want spoke1 to communicate with spoke2, but not spoke3. You would simply add a routing rule for spoke2 to the route table associated with spoke1 and omit any rule for spoke3. Alternatively, to support a "blacklist" approach, you can add a rule that forwards all "spoke" traffic to the NVA router and then add rules that drop traffic on specific paths. This approach automatically includes new virtual networks in the routing topology. Both patterns are sketched just after this list.
    • Functionality - In addition to basic routing rules, the NVA router may implement more advanced functionality such as firewalling.
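To illustrate the two approaches described above, here are the kinds of user-defined routes each one would place in spoke1's route table. The prefixes and the NVA address are assumptions carried over from the earlier sketch; "None" is the Azure next-hop type that drops matching traffic.

```python
from azure.mgmt.network.models import Route

NVA_IP = "10.0.1.4"  # assumed private IP of the router NVA in the hub

# "Whitelist": only destinations explicitly routed to the NVA are reachable.
whitelist_routes = [
    Route(name="allow-spoke2", address_prefix="10.2.0.0/16",
          next_hop_type="VirtualAppliance", next_hop_ip_address=NVA_IP),
    # No route for spoke3 (10.3.0.0/16), so spoke1 cannot reach it.
]

# "Blacklist": forward the whole spoke range to the NVA, then drop specific
# paths with a more specific route whose next hop is None.
blacklist_routes = [
    Route(name="all-spokes-via-nva", address_prefix="10.0.0.0/8",
          next_hop_type="VirtualAppliance", next_hop_ip_address=NVA_IP),
    Route(name="block-spoke3", address_prefix="10.3.0.0/16",
          next_hop_type="None"),
]
```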

    Disadvantages

    As with any choice, there are some drawbacks to consider when implementing a routed "hub and spoke" topology. These include:

    • Traffic Cost - You pay for every byte that traverses a peering connection. When a packet travels through the "hub" as it transits from one "spoke" to another, it passes across two peering connections, thus being charged twice. Fortunately, these costs are generally small and may not have a significant impact.
    • Router Appliance Cost - The NVA router will run as a virtual machine which will accrue charges. You will also be responsible for the licensing costs of any software running on the NVA virtual machine. Sizing the virtual machine properly and reserving it can mitigate this cost.
    • Performance - By implementing the router, you are introducing another hop in the communication between "spoke" virtual networks. There is also overhead associated with the NVA router itself. Properly sizing and optimizing the virtual machine hosting the router can mitigate much of this overhead.

    Alternatives

    While it is possible and relatively easy to configure transitive routing through a "hub and spoke" topology, it may not be the optimal solution in all cases. There are alternatives to implementing peer-to-peer transitive routing:

    • Define limited peering relationships - As mentioned earlier, a full peering configuration in a "hub and spoke" topology may require a prohibitive number of peering relationships. However, there is typically not a need for communication between all "spoke" virtual networks. Peering relationships can be established and managed on an as-needed basis.
    • Implement a shared service virtual network - If there are a finite number of servers required by multiple "spoke" networks (such as database servers), these can be placed in a single "spoke" virtual network. Peering relationships can be established between the shared services virtual network and any "spoke" network that requires access to the shared services.
    • Place shared services in the "hub" virtual network - The simplest solution is to place all shared services on servers in the "hub" network. Network traffic from all "spoke" networks is routed to the "hub" virtual network by design, so this doesn't add any complexity to the topology.

    Final Thoughts

Peer-to-peer transitive routing is currently not available by default in Azure virtual networking but can be implemented using a network virtual appliance and custom routing rules. This is a viable option for many network topologies, but there may be simpler alternatives. In Part Two of this series, I will go through the process of establishing peer-to-peer transitive routing in a "hub and spoke" network topology. Stay tuned.

     

     

    INE's Microsoft Azure Learning Path is available now! Experience the complete journey to Azure Certification. Use your All Access Pass to get started today. 
