Azure vs OCI Load Balancers & Traffic Routing: What Actually Matters
Load balancers are one of those things everyone uses but nobody really thinks about until something breaks or the bill arrives. I’ve been working with both Azure and OCI load balancers for different projects, and the approaches are different enough that it’s worth getting into the weeds.
The Product Lineup
Let’s start with what you’re actually choosing from, because both clouds have multiple load balancing services and the naming isn’t always helpful.
Azure gives you:
- Azure Load Balancer (L4, regional)
- Azure Application Gateway (L7, regional, includes WAF)
- Azure Front Door (L7, global, CDN + routing)
- Traffic Manager (DNS-based global routing)
- Cross-region Load Balancer (preview/GA depending on when you read this)
OCI gives you:
- Load Balancer (L4 and L7 combined, regional)
- Network Load Balancer (L4, ultra-low latency)
- Traffic Management Steering Policies (DNS-based routing)
Right off the bat, OCI’s lineup is simpler. One load balancer does both L4 and L7, which is conceptually cleaner. Azure split these because they evolved separately, and while that gives you more targeted options, it also means more decision paralysis.
Layer 4: The Basic Building Block
Azure Load Balancer
Azure’s L4 load balancer is solid and boring, which is what you want. You create a load balancer (Standard SKU is the only one that matters anymore), add a frontend IP configuration, create a backend pool, define health probes, and set up load balancing rules. It supports both inbound and outbound scenarios.
The Standard SKU is zone-redundant by default, which is great. It also supports multiple frontend IPs on a single load balancer, useful for hosting multiple services. HA Ports is a feature that lets you load balance all ports with a single rule, which sounds niche until you need it for NVAs.
Health probes are straightforward: HTTP, HTTPS, or TCP. You set an interval and threshold, and if a backend fails, it's taken out of rotation. The one thing that trips people up is the default 15-second probe interval - between the interval and the failure threshold, detection takes longer than people expect, and you sometimes need to tune both for slow-starting applications.
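A quick way to reason about probe settings: worst-case detection is roughly the interval times the failure threshold, plus one interval of undetected failure. A sketch, with the numbers as placeholders rather than any platform's exact semantics:

```python
def worst_case_detection_seconds(interval_s: float, unhealthy_threshold: int) -> float:
    """Approximate time from a backend dying to it leaving rotation.

    Assumes failures are only observed at probe time, so the worst case is
    one full interval of undetected failure plus `unhealthy_threshold`
    consecutive failed probes.
    """
    return interval_s * (unhealthy_threshold + 1)

# With a 15-second interval and 2 consecutive failures required:
print(worst_case_detection_seconds(15, 2))  # 45.0
```

That 45 seconds of traffic to a dead backend is why slow-starting apps need a longer interval but fast-failover apps need a shorter one - you can't optimize both with one setting.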
Pricing is per rule plus data processed. The data processing charge is $0.005/GB, which adds up but isn’t terrible. Where it gets expensive is when you need multiple load balancers for isolation or different configurations.
OCI Load Balancer (L4 mode)
OCI’s Load Balancer can operate in L4 mode, but honestly, if you want pure L4, you probably want the Network Load Balancer instead. It’s newer, faster, and purpose-built.
The Network Load Balancer is OCI’s answer to AWS’s Network Load Balancer, and it’s genuinely impressive from a performance standpoint. We’re talking microsecond-level latency, preservation of source IP by default, and the ability to handle millions of requests per second per instance.
The Network Load Balancer is a truly transparent pass-through - backends see the actual client source IPs by default. This is huge for protocols that care about source IP, and you don't need to mess with Proxy Protocol or X-Forwarded-For workarounds.
Configuration is simpler than Azure: backend sets, listeners, and health checks. That’s basically it. The health checks are TCP or HTTP/HTTPS, similar to Azure.
Pricing is per hour plus data processed (called "bandwidth" in OCI pricing docs). Whether it beats Azure depends on your traffic mix: the per-GB rate ($0.008/GB) is higher than Azure's ($0.005/GB), but the hourly cost is lower, so fixed-cost-dominated workloads favor OCI while very data-heavy ones can tip back toward Azure.
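To see where the crossover lands, here's a toy cost model using the per-GB rates quoted above. The hourly rates are invented placeholders purely to illustrate the tradeoff - swap in the current pricing-page numbers before drawing any conclusions:

```python
def monthly_cost(hourly_rate: float, per_gb: float, gb_per_month: float,
                 hours: int = 730) -> float:
    """Fixed hourly charge plus data-processing charge for one month."""
    return hourly_rate * hours + per_gb * gb_per_month

# Per-GB rates from the text; hourly rates are hypothetical placeholders.
azure = lambda gb: monthly_cost(0.025, 0.005, gb)
oci = lambda gb: monthly_cost(0.011, 0.008, gb)

for gb in (100, 1_000, 10_000):
    print(f"{gb:>6} GB/mo  azure=${azure(gb):.2f}  oci=${oci(gb):.2f}")
```

With these (made-up) hourly rates, OCI wins at low volume and Azure catches up once the per-GB charge dominates - which is exactly why "which is cheaper" has no one-line answer.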
Winner for L4?
For most use cases, they're comparable. Azure's Load Balancer is more mature with more features and better documentation. OCI's Network Load Balancer is faster and often cheaper. If you're doing high-throughput, latency-sensitive work, OCI wins. For everything else, it's a wash.
Layer 7: Where It Gets Interesting
Azure Application Gateway
Application Gateway is Azure’s L7 load balancer, and it’s feature-packed. You get URL-based routing, host-based routing, SSL termination, end-to-end SSL, cookie-based session affinity, WebSocket support, custom health probes, and integration with Azure Web Application Firewall.
The architecture is straightforward: you create an Application Gateway with a frontend IP, add backend pools, create HTTP settings (which define how Application Gateway talks to backends), and then set up routing rules that tie listeners to backend pools.
URL-based routing lets you send /api/* to one backend pool and /images/* to another, which is genuinely useful for microservices. Host-based routing lets you serve multiple domains from the same gateway. The WAF integration is solid - you get OWASP Core Rule Set protection with minimal config.
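The matching behavior is easy to picture as longest-prefix routing. A minimal sketch, with hypothetical pool names standing in for real backend pools:

```python
def pick_backend_pool(path: str, rules: dict[str, str], default: str) -> str:
    """Longest-prefix match, the way /api/* and /images/* style rules behave."""
    best, best_len = default, -1
    for prefix, pool in rules.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = pool, len(prefix)
    return best

# Hypothetical pool names for illustration.
rules = {"/api/": "api-pool", "/images/": "static-pool"}
print(pick_backend_pool("/api/v1/users", rules, "web-pool"))  # api-pool
print(pick_backend_pool("/checkout", rules, "web-pool"))      # web-pool
```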
Here’s what people don’t tell you: Application Gateway is slow to provision. We’re talking 20-30 minutes for initial deployment. Updates are also slow. If you’re doing infrastructure-as-code with frequent rebuilds, this gets old fast.
Autoscaling exists but works differently than you’d expect. Application Gateway v2 (the current version) scales based on compute units, which are a function of connection count, throughput, and compute. You set min and max instance counts, and Azure handles the scaling. It works, but it’s not as responsive as I’d like - expect 3-5 minutes for scale-out operations.
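For capacity planning, the math is roughly "capacity units = the max of your connection, throughput, and compute demands." The ~2,500-connections and ~2.22-Mbps-per-unit ratios below are my reading of the v2 docs and may have changed, so verify against current documentation before budgeting:

```python
import math

def capacity_units(connections: int, throughput_mbps: float,
                   compute_units: float) -> int:
    """Rough Application Gateway v2 capacity-unit estimate.

    Assumes ~2500 persistent connections and ~2.22 Mbps per capacity
    unit (documented ratios at time of writing); whichever dimension
    is the bottleneck drives the bill.
    """
    return math.ceil(max(connections / 2500,
                         throughput_mbps / 2.22,
                         compute_units))

print(capacity_units(connections=10_000, throughput_mbps=50, compute_units=2))
```

Note that throughput, not connection count, is often the binding constraint - in this example 10,000 connections only need 4 units, but 50 Mbps needs 23.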
The pricing model is complicated: you pay per hour for the gateway itself, per compute unit hour, and for data processing. A small gateway with moderate traffic might cost $200-400/month. A large gateway with autoscaling and WAF can easily hit $2000+/month. Read the pricing page carefully.
OCI Load Balancer (L7 mode)
OCI’s Load Balancer handles both L4 and L7 traffic, and you choose when you configure your listeners. For L7, it supports path routing, hostname routing, SSL termination, session persistence, and health checks.
What’s nice: you get both capabilities in one service. What’s less nice: the configuration model is less intuitive than Application Gateway if you’re used to Azure.
You define backend sets (groups of backends), listeners (frontends that accept traffic), and then routing policies within listeners that determine where traffic goes. Path-based routing uses “route rules” within the listener configuration. It works, but the mental model took me a minute to grasp.
The Web Application Firewall is a separate service in OCI, not integrated into the load balancer like Azure. You create a WAF policy and attach it to the load balancer. This separation is cleaner architecturally but means more moving parts.
Performance is good - I haven’t run into latency issues. Provisioning is faster than Azure Application Gateway, usually 5-10 minutes. Updates are also faster.
Pricing is simpler: you pay per hour based on the shape (bandwidth capacity) you choose, plus data processed. A 10 Mbps load balancer is around $20-30/month, a 400 Mbps is around $150/month, plus the $0.008/GB for processed data. For equivalent capacity, it’s often cheaper than Application Gateway.
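Sizing an OCI shape boils down to "smallest tier that covers your peak bandwidth." The tier table below is illustrative, loosely based on the ballpark prices above - OCI's actual catalog (including flexible shapes) differs, so check the console:

```python
# Hypothetical Mbps -> $/month tiers, for illustration only.
SHAPES = {10: 25.0, 100: 60.0, 400: 150.0, 8000: 600.0}

def pick_shape(required_mbps: float) -> tuple[int, float]:
    """Smallest shape whose bandwidth covers the requirement."""
    for mbps in sorted(SHAPES):
        if mbps >= required_mbps:
            return mbps, SHAPES[mbps]
    raise ValueError("no shape large enough")

print(pick_shape(250))  # (400, 150.0)
```

The step-function pricing means a workload just over a tier boundary pays the full next tier - worth knowing before you commit to a shape.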
Winner for L7?
Application Gateway has more features and better documentation, especially around complex routing scenarios. If you need tight WAF integration or you’re already deep in Azure, it’s the obvious choice.
OCI’s Load Balancer is simpler, faster to provision, and cheaper. If you don’t need every feature and want something that just works, it’s compelling.
Global Load Balancing and Traffic Management
This is where the platforms diverge significantly.
Azure Front Door
Front Door is Azure’s global L7 load balancer with CDN capabilities. You configure backends across multiple regions, and Front Door routes users to the best backend based on latency, health, and your routing preferences.
The feature set is extensive: URL routing, session affinity, custom domains with SSL, caching, DDoS protection, and WAF. You can do A/B testing by splitting traffic percentages. The Edge locations are Azure’s CDN POPs, so global coverage is excellent.
The killer feature is the intelligent routing. Front Door constantly monitors backend health and latency from its edge locations. If a backend goes down or gets slow, traffic automatically fails over. This happens in seconds, not minutes. For global applications where downtime is expensive, this is huge.
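A stripped-down model of that routing decision - healthy origins only, lowest measured latency wins, with a tolerance band treated as equivalent - might look like this (origin names and the tolerance value are made up, not Front Door's actual algorithm):

```python
def pick_origin(origins: list[dict], latency_tolerance_ms: float = 50.0) -> str:
    """Pick among healthy origins, latency-based-routing style.

    Simplified model: filter out unhealthy origins, then treat anything
    within `latency_tolerance_ms` of the fastest as an equivalent
    candidate (a real global LB spreads load across that set).
    """
    healthy = [o for o in origins if o["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy origins")
    best = min(o["latency_ms"] for o in healthy)
    candidates = [o for o in healthy if o["latency_ms"] <= best + latency_tolerance_ms]
    return candidates[0]["name"]

origins = [
    {"name": "eastus", "latency_ms": 30, "healthy": True},
    {"name": "westeu", "latency_ms": 110, "healthy": True},
    {"name": "japaneast", "latency_ms": 45, "healthy": False},
]
print(pick_origin(origins))  # eastus
```

The key point the model captures: because health is evaluated before latency, a failed origin is excluded on the very next routing decision - no DNS cache to wait out.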
Caching works well if your content is cacheable. You define caching rules, and Front Door serves cached content from edge locations. This reduces load on your backends and improves latency for end users.
The gotchas: Front Door is expensive. You pay per routing rule, per custom domain, per GB of data transfer out, and per request. A moderate-traffic site can easily hit $500-1000/month. High-traffic sites pay more. The cost scales roughly linearly with traffic, which can be painful.
Also, Front Door’s configuration model is complex. You’ve got front-end hosts, backend pools, routing rules, and rules engines. The learning curve is steep.
Azure Traffic Manager
Traffic Manager is DNS-based routing, not a true load balancer. You create a Traffic Manager profile, add endpoints (which can be Azure resources or external IPs), and choose a routing method: priority, weighted, performance, geographic, or multivalue.
Performance-based routing sends users to the closest endpoint based on DNS resolution latency. Geographic routing sends users to specific endpoints based on their location. Weighted routing lets you split traffic for A/B testing or gradual rollouts.
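Weighted routing is easiest to reason about by simulating it: over many DNS resolutions, the answers converge on the configured ratio. A sketch with hypothetical endpoint names:

```python
import random

def weighted_endpoint(weights: dict[str, int], rng: random.Random) -> str:
    """Return one endpoint, chosen proportionally to its weight - the
    behaviour of weighted DNS routing observed over many resolutions."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

rng = random.Random(42)
weights = {"stable": 90, "canary": 10}  # hypothetical endpoint names
hits = {"stable": 0, "canary": 0}
for _ in range(10_000):
    hits[weighted_endpoint(weights, rng)] += 1
print(hits)  # roughly a 90/10 split
```

The caveat: this convergence is over *resolutions*, not clients. A handful of big clients behind caching resolvers can skew the real traffic split well away from 90/10.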
Traffic Manager is cheap - $0.54 per million DNS queries plus a small monthly fee per monitored endpoint (about $0.36 for Azure endpoints). For most workloads, this is under $50/month.
The limitation is that it’s DNS-based, so you’re subject to DNS TTL and caching. Failover isn’t instant - it depends on clients respecting TTL, which not all do. For critical workloads, you probably want Front Door. For cost-sensitive scenarios where 30-60 second failover is acceptable, Traffic Manager works fine.
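Back-of-envelope, the worst case for a TTL-respecting client is detection time plus one full TTL. A sketch, with the probe numbers as placeholders:

```python
def worst_case_failover_seconds(ttl_s: int, probe_interval_s: int,
                                failure_threshold: int) -> int:
    """Upper bound on DNS-based failover for a TTL-respecting client:
    time for the health monitor to notice the failure, plus one full
    TTL for the stale answer to age out of the client's cache."""
    detection = probe_interval_s * failure_threshold
    return detection + ttl_s

# 30s TTL, 10s probes, 3 consecutive failures required:
print(worst_case_failover_seconds(30, 10, 3))  # 60
```

And that 60 seconds is the *optimistic* bound - clients or resolvers that ignore TTLs can pin to a dead endpoint far longer, which is the real argument for anycast-style routing like Front Door.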
OCI Traffic Management Steering Policies
OCI’s global routing is entirely DNS-based, similar to Traffic Manager. You create a steering policy and attach it to a DNS zone. The policies support failover, load balancing, geolocation steering, and ASN steering (routing based on autonomous system number, which is niche but cool).
Health checks are built in - you define endpoints and health check configurations, and OCI removes unhealthy endpoints from DNS responses.
The interface is less polished than Azure’s, and the documentation is thinner. But it works, and it’s included in your OCI subscription without per-query charges beyond standard DNS pricing.
Winner for Global Routing?
Front Door is the most powerful option by far, but you pay for it. If you need intelligent routing with sub-second failover and don’t mind the cost, it’s unmatched.
Traffic Manager and OCI’s steering policies are comparable - both DNS-based, both cheap, both limited by DNS behavior. Choose based on which cloud you’re already using.
The Stuff That Actually Matters in Production
SSL/TLS Termination
Both platforms handle this well. Azure Application Gateway and Front Door support SNI (Server Name Indication) so you can host multiple SSL sites on one load balancer. You can use Azure Key Vault for certificate storage, which is convenient.
OCI Load Balancer also supports SNI and lets you store certificates directly in the load balancer or use OCI Vault. The certificate renewal story is less automated than Azure, though. Azure has better integration with Let’s Encrypt and automatic cert renewal.
Session Persistence
Azure Application Gateway supports cookie-based session affinity. It inserts a cookie and ensures subsequent requests from the same client go to the same backend. This works fine but breaks if your backends scale down and that specific instance disappears.
OCI Load Balancer supports application cookie persistence (you specify the cookie name) or load balancer-generated cookie persistence. The flexibility is nice if your app already uses session cookies.
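The core affinity behavior on both platforms boils down to a few lines: honor the cookie if its backend is still around, otherwise pick a backend and re-pin. A sketch - not either platform's actual implementation, and the cookie name is made up:

```python
import random

def route_with_affinity(cookies: dict[str, str], backends: list[str],
                        cookie_name: str = "lb-affinity") -> tuple[str, dict[str, str]]:
    """Sticky-session sketch: honour an existing affinity cookie if it
    still points at a live backend, otherwise pick one and set the cookie."""
    target = cookies.get(cookie_name)
    if target not in backends:  # first visit, or pinned backend is gone
        target = random.choice(backends)
    return target, {**cookies, cookie_name: target}

backends = ["10.0.1.4", "10.0.1.5"]  # hypothetical backend IPs
backend, cookies = route_with_affinity({}, backends)
again, _ = route_with_affinity(cookies, backends)
print(backend == again)  # True: subsequent requests stick
```

The `if target not in backends` branch is exactly the failure mode mentioned above: when a scaled-down instance disappears, the session silently re-pins to a new backend that has none of its state.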
For serious session management, though, you should be using a distributed cache or database anyway. Sticky sessions at the load balancer are a band-aid.
Connection Draining
Azure Application Gateway calls this “connection drain timeout.” When you remove a backend or it fails health checks, existing connections get a grace period to finish before being forcibly closed. Default is 30 seconds, max is 500 seconds.
OCI Load Balancer has the same concept, called “connection drain timeout.” Default is 300 seconds, max is 3600 seconds (one hour, which seems excessive but okay).
Both work as expected. Set this based on how long your backend requests typically take, with a safety margin.
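A simple way to pick the value: take your p99 request duration, apply a safety factor, and cap at the platform maximum. As a sketch:

```python
import math

def drain_timeout_seconds(p99_request_s: float, safety_factor: float = 2.0,
                          platform_max_s: int = 500) -> int:
    """Size the drain window from observed request duration: long enough
    for in-flight requests to finish, capped at the platform maximum
    (500s on Application Gateway, 3600s on OCI per the limits above)."""
    return min(math.ceil(p99_request_s * safety_factor), platform_max_s)

print(drain_timeout_seconds(20))   # 40
print(drain_timeout_seconds(400))  # capped at 500
```

If your p99 times the safety factor exceeds the platform cap, that's a signal the long-running work belongs in a queue, not a synchronous request.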
Observability
Azure wins here, not even close. Application Gateway and Front Door integrate beautifully with Azure Monitor. You get metrics out of the box - request count, response time, healthy/unhealthy host count, backend response time, and more. You can set up alerts, create dashboards, and query logs in Log Analytics.
Diagnostic logs give you detailed request/response information, including headers, which is invaluable for debugging. Front Door also logs cache hit rates and origin latency.
OCI Load Balancer integrates with OCI Monitoring and Logging services, but it’s more barebones. You get basic metrics - bandwidth, connections, health status - but not as much detail. Access logs exist but require manual parsing. There’s no equivalent to Azure’s Application Insights integration.
If you need deep visibility into traffic patterns and performance, Azure’s tooling is significantly better.
Reliability and SLAs
Azure Application Gateway Standard_v2 has a 99.95% SLA. Front Door has 99.99%. Traffic Manager has 99.99%.
OCI Load Balancer (both types) has a 99.95% SLA.
In practice, I’ve had more issues with Azure Application Gateway than OCI Load Balancer, but that’s anecdotal and probably a function of traffic volume and configuration complexity.
Real-World Decision Points
Choose Azure Load Balancer if:
- You’re already in Azure and need basic L4 load balancing
- You need HA Ports for network virtual appliances
- You want zone redundancy without thinking about it
Choose OCI Network Load Balancer if:
- You need ultra-low latency (microseconds matter)
- You’re handling very high connection rates
- You want source IP preservation without X-Forwarded-For hacks
- You’re cost-conscious and pushing serious traffic
Choose Azure Application Gateway if:
- You need sophisticated L7 routing (URL paths, hostnames, headers)
- WAF integration is important
- You’re comfortable with Azure pricing and the provisioning time
- You want excellent observability and monitoring
Choose OCI Load Balancer if:
- You want one service that does both L4 and L7
- You prefer simpler pricing and faster provisioning
- You don’t need every possible feature, just the core ones done well
- Budget is tight
Choose Azure Front Door if:
- You need global distribution with intelligent routing
- Sub-second failover matters for your business
- You want integrated CDN capabilities
- You can afford premium pricing
Choose DNS-based routing (Traffic Manager or OCI) if:
- You need basic global routing
- 30-60 second failover is acceptable
- You want to minimize cost
- Your traffic patterns are predictable
What I Actually Use
For internal services in Azure, I use Azure Load Balancer. It’s simple and cheap enough that I don’t overthink it.
For public-facing APIs in Azure, I use Application Gateway with WAF. The cost hurts, but the security and routing features justify it. I’ve learned to automate the slow provisioning times with parallel deployments.
For global applications, I use Front Door despite the cost. The performance and failover capabilities are worth it when downtime directly impacts revenue.
For OCI, I use Network Load Balancer for backend services and the regular Load Balancer for public-facing apps. The pricing is friendly enough that I don’t worry about optimizing too hard.
The honest truth? Both platforms have good load balancing options. Azure has more features and better observability. OCI is simpler and cheaper. Pick based on what matters more for your specific use case, not based on theoretical maximums you’ll never hit.
And for the love of all that’s holy, test your failover scenarios before you need them. I’ve seen too many “highly available” architectures fall apart because nobody actually tested what happens when a backend dies.