Virtual network peering is one of those Azure costs that looks almost negligible when you first encounter it. A penny per gigabyte in each direction for same-region peering. At those rates, it barely registers against the cost of your compute, storage, or firewall. It's rounding error. It's pocket change.
Until it isn't.
We see this pattern repeatedly across the Azure environments we review. VNet peering starts as an invisible cost, a fraction of a percent of the total bill. Then the architecture grows. More spokes get added. Cross-region connectivity comes into play. Shared services generate constant traffic through the hub. And suddenly there's a networking line item running into hundreds of pounds a month that nobody anticipated and nobody is monitoring.
The problem isn't the per-GB rate. The problem is what hub-spoke architecture does to your traffic volumes.
How Peering Pricing Works
The pricing model for VNet peering is straightforward on paper. For same-region peering, you pay roughly a penny per gigabyte for inbound traffic and a penny per gigabyte for outbound traffic. So one gigabyte flowing from Spoke A to the Hub costs about two pence in total: a penny charged on the outbound side and a penny on the inbound side.
Cross-region peering is where the numbers shift. If your peered VNets are in different Azure regions, the rate jumps to roughly three and a half pence per gigabyte in each direction. That's more than three times the same-region rate, and it applies to every gigabyte that crosses the peering connection.
These are small numbers in isolation. But hub-spoke architectures are specifically designed to funnel traffic through peering connections, which means the volumes add up in ways that per-GB pricing makes easy to underestimate.
The Hub-Spoke Traffic Multiplier
Here's where it gets expensive. In a standard hub-spoke architecture, the hub VNet hosts your centralised services: the firewall, the VPN or ExpressRoute gateway, shared DNS, Active Directory Domain Services, maybe a central file share or management tooling. Every spoke that needs to reach these services (which is every spoke, by definition) sends traffic through a peering connection.
That means peering isn't just handling occasional cross-VNet communication. It's handling all of your centralised traffic flows, all the time. DNS queries from every spoke. Authentication traffic from every domain-joined machine. Backup data flowing to a central Recovery Services vault. Monitoring telemetry from every spoke heading to a central Log Analytics workspace. Management traffic from Bastion or jump boxes in the hub.
Let's do the maths on a moderately sized environment. Say you have ten spokes, each generating around 500GB per month of traffic that crosses a peering connection to reach the hub. That's 5TB of outbound peering traffic from the spokes, plus 5TB of inbound peering traffic at the hub. At a penny per gigabyte each way, same-region, that's roughly a hundred pounds a month on peering alone.
Now add the return traffic (responses flowing back from the hub to the spokes) and you're looking at somewhere between a hundred and two hundred pounds a month. For same-region peering. Before any cross-region traffic enters the picture.
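The arithmetic above is easy to sanity-check with a few lines of code. This is a rough sketch using the illustrative figures from this example (ten spokes, 500GB per spoke, roughly a penny per gigabyte each way), not official Azure pricing:

```python
# Rough model of same-region peering costs for a hub-spoke topology.
# The rate and volumes are this article's illustrative figures, not official pricing.

RATE_SAME_REGION_GBP_PER_GB = 0.01  # ~1p per GB, charged on each side of the peering

def peering_cost(gb_transferred: float, rate: float = RATE_SAME_REGION_GBP_PER_GB) -> float:
    """Cost of moving gb_transferred across one peering: billed outbound AND inbound."""
    return gb_transferred * rate * 2

spokes = 10
gb_per_spoke_per_month = 500  # spoke-to-hub traffic

spoke_to_hub = spokes * peering_cost(gb_per_spoke_per_month)
print(f"Spoke-to-hub peering: ~£{spoke_to_hub:.0f}/month")  # ~£100/month

# Return traffic (hub back to spokes) roughly doubles it, giving the £100-£200 range.
print(f"With return traffic: up to ~£{spoke_to_hub * 2:.0f}/month")
```

The point of the model isn't precision; it's that a per-GB rate you'd dismiss in isolation scales linearly with spoke count and per-spoke volume, both of which tend to grow.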
The Hidden Double-Hop
There's a subtlety in hub-spoke peering costs that catches a lot of people out: spoke-to-spoke traffic.
If Spoke A needs to communicate with Spoke B, and both are peered with the hub, that traffic traverses two peering connections. It goes from Spoke A to the Hub (one peering, billed both sides), then from the Hub to Spoke B (another peering, billed both sides). One logical connection between two spokes results in four peering charges.
This double-hop pattern is particularly expensive when you have services in one spoke that are consumed by workloads in other spokes. A shared database in Spoke A being queried by applications in Spokes B, C, and D means every query and every result set is traversing two peerings. The costs compound quickly, and they're rarely visible in cost analysis because peering charges aren't tagged or attributed to specific workloads. They show up as a flat networking cost on the VNet resource.
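The double-hop penalty can be made concrete with the same kind of sketch. Assuming the same illustrative penny-per-GB same-region rate, routing spoke-to-spoke traffic via the hub costs exactly twice what a direct peering between the two spokes would:

```python
# Compare spoke-to-spoke traffic routed via the hub against a direct spoke-to-spoke
# peering. The ~1p/GB same-region rate is an illustrative assumption.

RATE = 0.01  # GBP per GB, charged on each side of each peering connection

def via_hub(gb: float) -> float:
    # Spoke A -> hub (outbound + inbound), then hub -> Spoke B (outbound + inbound):
    # four charges per logical gigabyte.
    return gb * RATE * 4

def direct_peering(gb: float) -> float:
    # One peering, billed both sides: two charges per gigabyte.
    return gb * RATE * 2

gb_per_month = 1_000  # e.g. a shared database in Spoke A queried from Spoke B
print(f"Via hub:        ~£{via_hub(gb_per_month):.0f}/month")        # ~£40
print(f"Direct peering: ~£{direct_peering(gb_per_month):.0f}/month")  # ~£20
```

Direct peering isn't always appropriate (you may need the hub firewall to inspect that traffic), but where it is, it halves the peering cost and removes a hop of latency.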
Cross-Region: Where It Really Multiplies
Multi-region architectures introduce global VNet peering, and the cost increase is significant. At roughly three and a half pence per gigabyte each way instead of a penny, the same traffic volumes that cost a hundred pounds in a single region can cost three hundred and fifty pounds when they cross regions.
If you're running a disaster recovery configuration with active replication between regions, or a multi-region application where services in UK South need to communicate with services in West Europe, every gigabyte of that traffic is charged at the cross-region rate. Database replication, log shipping, configuration sync, health probes. It all traverses global peering and it all incurs the higher rate.
We've seen environments where cross-region peering costs exceed the same-region peering costs by a factor of five or more, simply because nobody thought to account for how much background traffic flows between regions in a properly configured multi-region deployment.
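Putting the two rates side by side shows the multiplier directly. Again, these are the approximate figures used in this article rather than official pricing:

```python
# How an identical traffic profile scales when the peering crosses regions.
# Both rates are the approximate figures used in this article.

SAME_REGION = 0.01    # ~1p per GB, each direction
CROSS_REGION = 0.035  # ~3.5p per GB, each direction

def monthly_cost(gb: float, rate: float) -> float:
    # Billed on both the outbound and inbound side of the peering.
    return gb * rate * 2

gb_per_month = 5_000  # 5TB/month crossing the peering
print(f"Same region:  ~£{monthly_cost(gb_per_month, SAME_REGION):.0f}/month")   # ~£100
print(f"Cross region: ~£{monthly_cost(gb_per_month, CROSS_REGION):.0f}/month")  # ~£350
```

Same traffic, same architecture, three and a half times the cost. Which is why background flows like replication and log shipping deserve scrutiny the moment they cross a region boundary.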
Common Patterns That Inflate Peering Costs
Some architectural decisions create disproportionate peering traffic without anyone realising. These are the patterns we see most frequently:
Centralised DNS forwarding. Every DNS query from every spoke gets forwarded to DNS servers in the hub. That's thousands of small packets per minute, all traversing peering. Individually tiny, collectively significant across dozens of spokes and thousands of VMs.
Centralised backup. Azure Backup traffic flowing from spoke VMs to a central Recovery Services vault in the hub subscription means every backup window generates substantial peering traffic. Full backups can run into hundreds of gigabytes per spoke.
Monitoring and logging. If your Log Analytics workspace sits in the hub and your diagnostic settings are shipping logs from every resource in every spoke, that's a continuous stream of telemetry data flowing through peering connections. Verbose logging configurations make this worse.
Dev/test environments in separate VNets. Development and staging environments peered to the hub for access to shared services generate peering traffic that's often overlooked in cost planning. These environments might be smaller, but they're typically chattier. Developers running builds, deployments, and tests generate surprisingly high traffic volumes.
File shares in the hub. Centralised Azure Files or file server VMs in the hub that are accessed from workloads across multiple spokes create constant read/write traffic through peering connections.
Optimisation Options
The goal isn't to eliminate peering. It's fundamental to hub-spoke architecture. The goal is to reduce unnecessary traffic that flows through peering connections when it doesn't need to.
Service endpoints and Private Link can bypass peering entirely for traffic to Azure PaaS services. If your spokes are accessing Azure Storage, Azure SQL, or Key Vault through the hub because that's how the network routes are configured, Private Link endpoints deployed directly in the spoke VNets eliminate that peering traffic completely. The spoke talks directly to the PaaS service over the Microsoft backbone without touching the hub.
Local DNS resolution in spokes reduces the constant stream of DNS forwarding traffic. Azure DNS Private Resolver deployed in each spoke (or a subset of spokes) can handle resolution locally, only forwarding to the hub for domains that genuinely require centralised resolution.
Evaluate what actually needs the firewall. Not all traffic needs to route through the hub firewall. Network Security Groups can handle basic allow/deny rules at the subnet level without any traffic leaving the spoke VNet. If you're routing monitoring traffic or backup traffic through the firewall purely because the default route sends everything to the hub, consider whether user-defined route (UDR) exceptions for specific traffic patterns could reduce your peering volumes.
Monitor your peering metrics. Azure Monitor exposes peering traffic volumes at the VNet level. Check the "Data transferred in peering" and "Data transferred out peering" metrics for each VNet. If specific spokes are generating unexpectedly high peering traffic, investigate what's driving it before assuming the architecture requires it.
Quick Wins
If you haven't looked at your peering costs before, start here:
- Check Azure Monitor for peering traffic volumes on each VNet. Sort by traffic volume and identify your chattiest spokes. The numbers might surprise you.
- Identify spoke-to-spoke traffic patterns. If two spokes communicate heavily with each other through the hub, consider whether direct VNet peering between those spokes (bypassing the hub) would be cheaper and faster.
- Evaluate Private Link for PaaS access. If spoke workloads are reaching Azure Storage, SQL, or other PaaS services via the hub, Private Link endpoints in the spokes can eliminate that peering traffic entirely.
- Review cross-region traffic. If you're running global VNet peering, understand exactly what traffic is flowing between regions and whether all of it is necessary. Replication schedules, sync frequencies, and logging verbosity are all tuneable.
- Audit DNS forwarding volumes. If every spoke forwards every DNS query to the hub, deploying local resolvers in high-traffic spokes can meaningfully reduce peering costs.
Peering is one of those costs that's easy to dismiss because the per-GB rate looks trivial. But architecture decisions compound, traffic volumes grow, and a penny per gigabyte across tens of terabytes of monthly traffic stops being trivial quite quickly. The first step is simply looking at the numbers. Most organisations never have.
Wondering how much your VNet peering is actually costing you? Book a free Azure cost assessment and we'll map your traffic patterns and identify where networking costs can be reduced.