Enterprise & Multi-site
When you're running 50, 200, or 500 sites across multiple countries, the network isn't a utility. It's the thing that determines whether the business operates or doesn't.
Multi-site enterprise networking is a discipline that vendor marketing has thoroughly polluted. Every SD-WAN pitch deck shows the same diagram: a clean topology with a centralized controller, identical branch appliances, and a cloud that magically solves everything. The demo works perfectly. The PoC goes well. Then you try to deploy across 15 countries and discover that the real world doesn't match the slide.
We've designed and deployed multi-site networks across six continents. The technology is the straightforward part. The hard problems are operational: underlay diversity, regulatory variation, local ISP capability, and the relentless grind of keeping 200 sites consistently configured, monitored, and documented when each one has its own quirks. If you're an IT director staring at a spreadsheet of sites wondering how to bring coherence to the chaos, this page is for you.
SD-WAN: what the vendor demo doesn't show you
SD-WAN is a genuine architectural advance. Application-aware routing, centralized policy management, transport independence. These are real capabilities that solve real problems. But the gap between the vendor pitch and a working multi-country deployment is enormous, and it's worth being specific about where the problems live.
Underlay quality variation. SD-WAN abstracts the transport layer, which is powerful. But "abstract" doesn't mean "fix." If the underlying circuit at your branch in Southeast Asia is a 20 Mbps ADSL line with 150ms of jitter during business hours, no SD-WAN overlay is going to make that behave like the 1 Gbps dedicated internet access (DIA) circuit at your headquarters. The SD-WAN controller will route around it when it can, apply forward error correction when it can't, and at some point the physics of that circuit will win. The vendor will tell you SD-WAN "optimizes" the underlay. What it actually does is make the best of whatever you give it, which is a meaningful distinction.
"Business broadband" means radically different things in different markets. In Northern Europe, you might get 500 Mbps symmetric fiber to a small branch office with five nines availability and a four-hour SLA. In parts of South America, "business grade" could mean a best-effort DSL line that drops out during heavy rain. In the Middle East, regulatory constraints may force you through specific gateway carriers with mandated inspection. In Southeast Asia, you might find excellent urban fiber but the moment you leave the city center, you're on microwave or cellular. SD-WAN doesn't fix underlay problems. It surfaces them.
Multi-country regulatory complexity. Data sovereignty, lawful intercept requirements, encryption restrictions, local carrier mandates. Some countries require that certain traffic types stay within national borders. Others mandate specific VPN protocols or prohibit others. An SD-WAN architecture that works perfectly across the European Union may be illegal in parts of the Middle East without modification. Your SD-WAN vendor's compliance documentation probably covers the major markets. It probably doesn't cover everywhere you have a branch.
The single-vendor trap. Most SD-WAN platforms work best (and some only work at all) when every site runs the same vendor's appliance. Which means you're now operationally dependent on a single vendor's hardware supply chain, software quality, and roadmap decisions. We've watched organizations discover this during the global chip shortage when their SD-WAN vendor couldn't deliver appliances for six months. The sites that still needed connecting didn't care about semiconductor supply chains. They needed to work.
SD-WAN is a transport architecture, not a network strategy. The organizations that get it right treat it as one component (an important one) inside a broader design that accounts for underlay diversity, vendor risk, and operational reality.
MPLS migration: it's never just "turn it off"
Every multi-site network migration conversation includes the phrase "we want to move off MPLS." The drivers are real: MPLS circuits are expensive, lead times are long, and the market has moved toward internet-based transport. But the migration is rarely as clean as decommissioning one thing and turning on another.
MPLS networks accumulate architectural decisions over years. BGP peering configurations, route preferences, traffic engineering through MPLS-TE or RSVP, QoS markings that downstream devices depend on, and often private IP addressing schemes that were designed around the MPLS topology. Ripping MPLS out and replacing it with SD-WAN means reworking all of those dependencies, not just swapping the transport.
The BGP implications alone can be significant. Many multi-site MPLS networks run iBGP between sites with the provider's PE routers acting as route reflectors. Moving to SD-WAN means either re-implementing that routing architecture on the overlay (which not all SD-WAN platforms handle gracefully) or fundamentally redesigning how sites learn about each other's networks. If you have complex route filtering, communities, or local preference tuning in your BGP config, expect that to take time to replicate.
We've done enough of these migrations to know that the right approach is almost always hybrid: run MPLS and SD-WAN in parallel for a transition period, migrate sites in phases starting with the least critical, and keep MPLS as the backbone for sites with strict latency or jitter requirements until the internet-based transport proves itself. The CFO wants the MPLS bill gone immediately. The network engineer knows that cutting over 200 sites simultaneously is how you end up on a conference bridge at 3am explaining why the ERP system is unreachable.
Application-aware routing and why it actually matters
Traditional routing makes forwarding decisions based on destination IP. Application-aware routing makes forwarding decisions based on what the traffic actually is and what it needs. This is genuinely useful in a multi-site environment, and it's worth understanding why at a protocol level rather than just accepting the marketing claim.
Consider a site with two WAN links: a 100 Mbps DIA circuit and a 50 Mbps broadband connection. Traditional routing sends everything down the primary and fails over to the secondary when the primary dies. Application-aware routing can identify that the voice traffic needs low jitter and send it over the DIA, while bulk file transfers and Windows updates go over the broadband. It can monitor both paths in real time (measuring latency, loss, and jitter) and reroute traffic when a path degrades, not when it fails completely. The difference between "degraded" and "failed" is where the value lives. A circuit with 2% packet loss is still technically up, but your video calls sound terrible.
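The selection logic described above can be sketched in a few lines. Everything here is illustrative: the thresholds, path names, and metrics are invented for the example, not any vendor's actual defaults, and real platforms measure paths with continuous probes and apply far richer policy.

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    latency_ms: float   # measured latency on the path
    loss_pct: float     # packet loss over the measurement window
    jitter_ms: float    # inter-packet delay variation

# Hypothetical per-application SLA thresholds.
SLA = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30},
    "bulk":  {"latency_ms": 500, "loss_pct": 5.0, "jitter_ms": 100},
}

def meets_sla(metrics, app):
    """A path is eligible only if every metric is within the app's SLA."""
    t = SLA[app]
    return (metrics.latency_ms <= t["latency_ms"]
            and metrics.loss_pct <= t["loss_pct"]
            and metrics.jitter_ms <= t["jitter_ms"])

def select_path(paths, app):
    """Prefer the lowest-latency path that meets the SLA; None if none do."""
    eligible = {name: m for name, m in paths.items() if meets_sla(m, app)}
    if not eligible:
        return None  # a real platform would fall back to best-effort or FEC
    return min(eligible, key=lambda name: eligible[name].latency_ms)

paths = {
    "dia":       PathMetrics(latency_ms=20, loss_pct=0.1, jitter_ms=2),
    "broadband": PathMetrics(latency_ms=45, loss_pct=2.0, jitter_ms=15),
}
```

Note what happens when the DIA circuit develops 2% loss: it is still up, but it fails the voice SLA check, so voice moves to broadband while bulk traffic (with its looser thresholds) stays put. That is the "degraded versus failed" distinction in code.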
Deep packet inspection (DPI) engines in modern SD-WAN platforms can classify thousands of applications, including SaaS traffic. This matters because SaaS has fundamentally changed enterprise traffic patterns. When your ERP, CRM, collaboration suite, and file storage all live in the cloud, backhauling everything to headquarters for inspection makes no sense. Local internet breakout for trusted SaaS traffic, with direct-to-cloud routing, reduces latency and offloads your centralized security stack. But it also means every branch is now an internet egress point, which has security implications we'll get to.
WiFi as infrastructure, not an afterthought
In most multi-site enterprises, the WiFi network carries more devices and more traffic than the wired network. Yet it's frequently designed as an afterthought. A few access points thrown at the ceiling during the office fit-out, configured once, and never tuned again. This works until it doesn't, and in a multi-site environment, the inconsistency compounds fast.
The problem isn't the access points. Modern enterprise APs are capable hardware. The problem is design, density planning, and ongoing management at scale.
RF design per site. Every building is different. Wall materials, floor plans, ceiling heights, interference sources. A single-floor open-plan office has completely different RF characteristics from a multi-story concrete building with server rooms and warehousing. Cookie-cutter AP placement based on square footage produces dead zones in some rooms and co-channel interference in others. Each site needs an RF plan, either from a predictive modeling tool or an on-site survey, and those plans need to account for real-world client density, not theoretical maximums.
Authentication and segmentation. Enterprise WiFi means 802.1X authentication against RADIUS, with dynamic VLAN assignment based on user role. Corporate laptops on one VLAN, BYOD on another, guest traffic on an isolated segment with rate limiting and no access to internal resources. This is straightforward for one site. Across 200 sites, the RADIUS infrastructure, certificate management, and VLAN consistency become operational challenges that need automation and monitoring, not manual configuration.
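Dynamic VLAN assignment works by the RADIUS server returning tunnel attributes (Tunnel-Type, Tunnel-Medium-Type, Tunnel-Private-Group-ID, per RFC 3580) in the Access-Accept, which the switch or AP uses to place the client. A minimal sketch of the role-to-VLAN policy side, with invented role names and VLAN IDs:

```python
# Illustrative role-to-VLAN policy; the IDs and names are assumptions.
ROLE_VLANS = {
    "corporate": 10,
    "byod": 20,
    "guest": 30,
}

def vlan_attributes(role):
    """Build the RADIUS Access-Accept attributes (RFC 3580 usage) that
    steer an authenticated client onto its role's VLAN."""
    vlan = ROLE_VLANS.get(role, ROLE_VLANS["guest"])  # unknown roles -> guest
    return {
        "Tunnel-Type": "VLAN",
        "Tunnel-Medium-Type": "IEEE-802",
        "Tunnel-Private-Group-ID": str(vlan),
    }
```

The point of centralizing this mapping is exactly the multi-site consistency problem: the role-to-VLAN table lives in one place, and 200 sites enforce it identically.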
Cloud-managed versus on-premises controllers. Cloud-managed WiFi platforms have gotten good enough that on-premises controllers are increasingly hard to justify for most enterprise deployments. Centralized firmware management, configuration templates, RF optimization, and client analytics across hundreds of sites from a single dashboard. The trade-off is cloud dependency. If the management plane is unreachable, APs keep running on their last known configuration but you lose visibility and control. For most enterprises, that's an acceptable trade. For environments where the WiFi is mission-critical (healthcare, manufacturing floors, retail POS), you need to think harder about that failure mode.
Zero-trust and network segmentation at 200 sites
Network segmentation in a multi-site environment isn't optional. It's the architectural foundation that everything else depends on. And it's become significantly more complex than VLANs and ACLs.
Traditional segmentation uses VLANs at Layer 2 and firewall rules at Layer 3 to isolate traffic. This works, but it's brittle at scale. A 200-site enterprise with 10 VLANs per site is managing 2,000 VLAN definitions and associated firewall rules. Changes propagate slowly. Misconfigurations are hard to detect. And the model fundamentally assumes that everything inside a VLAN is trusted, which hasn't been true for years.
VXLAN (Virtual Extensible LAN) addresses some of the scalability problems by extending Layer 2 segments over Layer 3 infrastructure, giving you a 24-bit segment ID space instead of the 12-bit VLAN limit. This is particularly useful when you need consistent microsegmentation policies across sites without being constrained by the 4,094 VLAN ceiling. But VXLAN adds overlay complexity and requires careful design of the underlay multicast or ingress replication architecture to work reliably.
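The headroom difference is easy to see in numbers. One common allocation pattern (illustrative, not a standard) packs a site identifier into the upper bits of the 24-bit VNI and the local segment into the lower 12 bits, giving up to 4,096 sites each with a full 4,096-segment space and no cross-site clashes:

```python
VLAN_MAX = 4094        # usable 12-bit VLAN IDs (0 and 4095 are reserved)
VNI_MAX = 2**24 - 1    # 24-bit VXLAN Network Identifier space: 16,777,215

def site_segment_vni(site_id, segment_id):
    """Illustrative scheme: upper bits carry the site, lower 12 bits carry
    the local segment, so every (site, segment) pair maps to a unique VNI."""
    if not 0 <= segment_id <= VLAN_MAX:
        raise ValueError("segment must fit the local 12-bit VLAN space")
    vni = (site_id << 12) | segment_id
    if vni > VNI_MAX:
        raise ValueError("site_id too large for the 24-bit VNI space")
    return vni
```

Whether you scope VNIs per site like this or keep a global segment-to-VNI table is a design decision; the 24-bit space is what makes either option viable at 200 sites.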
Zero-trust networking takes segmentation further. Instead of trusting devices because they're on the right VLAN, every access request is authenticated and authorized independently. 802.1X provides the initial network admission (proving the device is who it claims to be via RADIUS authentication and certificate validation) but zero-trust extends that logic to every subsequent connection. A laptop on the corporate VLAN still proves its identity before accessing the ERP server. If the device posture changes (security agent disabled, certificate expired, anomalous behavior detected), access is revoked in real time.
Implementing this across 200 sites requires a few things that most organizations underestimate:
- A robust PKI. 802.1X with EAP-TLS means managing certificates for every device that touches the network. Certificate lifecycle management (issuance, renewal, revocation) at scale is a project in itself. Let the certificates expire and your Monday morning starts with 3,000 devices unable to authenticate.
- Consistent policy enforcement. The firewall rules at site 47 need to match the policy intent defined centrally. Configuration drift is inevitable without automation. We've audited multi-site networks where 30% of the sites had deviated from the standard configuration, usually because someone made a "temporary" local change that became permanent.
- East-west traffic visibility. Most enterprise firewalls are deployed at the network perimeter. But in a segmented environment, the lateral traffic between segments is where compromised devices move. Without visibility into east-west flows (between VLANs, between sites) you're monitoring the front door while ignoring the internal corridors.
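The certificate lifecycle point is the one that bites first. The core of an automated renewal job is simple: sweep the device inventory and queue everything whose certificate expires inside the renewal window, soonest first, so re-enrollment (via SCEP or EST, for example) happens before anything lapses. A toy sketch with invented device names:

```python
from datetime import datetime, timedelta

def renewals_due(cert_expiry, now, window_days=30):
    """Return device names whose certificates expire within the renewal
    window, soonest first — the queue an automated re-enrollment job
    would work through before any device fails 802.1X."""
    deadline = now + timedelta(days=window_days)
    due = [name for name, exp in cert_expiry.items() if exp <= deadline]
    return sorted(due, key=lambda name: cert_expiry[name])

inventory = {                      # hypothetical device inventory
    "laptop-0001": datetime(2025, 9, 1),
    "laptop-0002": datetime(2025, 3, 10),
    "printer-047": datetime(2025, 3, 2),
}
```

The hard part in production isn't this loop; it's having an inventory accurate enough that the loop sees every device.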
The operational challenge: consistency at scale
Designing a multi-site network is a finite engineering problem. Operating it is an infinite operational one. And the operational challenge is where most multi-site networks fall apart, not because the design was wrong, but because the operational model couldn't maintain it.
Consider what "consistent configuration" means at 200 sites. Every switch needs the same VLAN definitions, the same spanning-tree priorities, the same SNMP communities, the same NTP sources, the same logging destinations. Every firewall needs the same rule base, updated simultaneously when policy changes. Every AP needs the same SSID configuration, the same RADIUS server hierarchy, the same RF power settings (adjusted per site). Every SD-WAN appliance needs the same application policies, the same SLA thresholds, the same breakout rules.
Manual configuration doesn't scale past about 20 sites. After that, you need infrastructure as code: templated configurations, automated deployment, drift detection, and compliance reporting. The tooling exists, from orchestration platforms built into SD-WAN controllers to standalone network automation frameworks. The challenge isn't the tools. It's committing to the discipline of treating network configuration as code, version-controlling it, testing changes in a staging environment, and deploying them through a pipeline rather than SSH-ing into devices.
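The template-and-drift idea fits in a few lines. This is a deliberately toy illustration using only the standard library; the template contents, site variables, and line-set comparison are assumptions, and real tooling does structured, order-aware diffs rather than comparing sets of lines:

```python
from string import Template

# Minimal per-site template; values here are invented for the example.
BASE = Template(
    "hostname $hostname\n"
    "ntp server 10.0.0.1\n"
    "logging host 10.0.0.2\n"
    "snmp-server community $community ro\n"
)

def render(site):
    """Produce the intended config for one site from the shared template."""
    return BASE.substitute(site)

def drift(intended, running):
    """Report lines the intended config requires but the device lacks, and
    lines the device carries that the template never authorized."""
    want = set(intended.splitlines())
    have = set(running.splitlines())
    missing = sorted(f"missing: {line}" for line in want - have)
    extra = sorted(f"unauthorized: {line}" for line in have - want)
    return missing + extra
```

Run `drift()` nightly across the fleet and the "temporary local change that became permanent" at site 47 shows up as an `unauthorized:` line instead of surfacing during an incident.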
Monitoring that generates signal, not noise. A 200-site network will produce thousands of alerts per day if you monitor everything at default thresholds. Interface utilization warnings, SNMP traps, syslog messages, SD-WAN path quality alerts. Most of it is noise. The operational challenge is tuning the monitoring to surface the things that actually require human attention while suppressing the things that don't. A branch's backup link flapping every Tuesday at 2am because the ISP runs maintenance is noise. The same link flapping at noon on a Wednesday is signal. Your monitoring needs enough context to know the difference.
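The Tuesday-2am example reduces to giving the alert pipeline context about known maintenance windows. A minimal sketch, with an invented circuit name and window format; a production system would pull windows from the ISP's maintenance feed or a change calendar rather than a hardcoded table:

```python
from datetime import datetime

# Illustrative per-circuit maintenance windows:
# (weekday, start_hour, end_hour), with Monday = 0, so 1 = Tuesday.
MAINTENANCE = {
    "branch-047-backup": [(1, 1, 3)],   # Tuesday 01:00-03:00 ISP maintenance
}

def is_signal(circuit, event_time):
    """Suppress a link-flap alert that lands inside the circuit's known
    maintenance window; escalate everything else to a human."""
    for weekday, start, end in MAINTENANCE.get(circuit, []):
        if event_time.weekday() == weekday and start <= event_time.hour < end:
            return False   # expected maintenance noise
    return True            # unexplained flap: page someone
```

The same event, classified differently by time and context, is the whole signal-versus-noise argument in one function.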
Change management at scale. When you push a firewall rule change to 200 sites simultaneously and something goes wrong, you have 200 simultaneous incidents. Staged rollouts (canary deployments to a subset of sites, soak periods, automated rollback on failure detection) are not luxury practices. They're survival practices. The organizations that operate large networks reliably treat every configuration change as a deployment, not an edit.
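The canary pattern itself is simple to state precisely. A sketch under obvious simplifications (invented wave fractions, health checks and deploy/rollback passed in as callables, no soak timers), showing the key property: a failed health check stops the rollout and rolls back everything deployed so far, so the rest of the fleet never sees the bad change.

```python
def staged_rollout(sites, waves, deploy, healthy, rollback):
    """Push a change wave by wave (fractions of the fleet). If any site in
    a wave fails its health check, roll back everything deployed so far
    and stop. Returns (sites_left_deployed, sites_rolled_back)."""
    done = []
    start = 0
    for frac in waves:
        end = min(len(sites), start + max(1, round(len(sites) * frac)))
        for site in sites[start:end]:
            deploy(site)
            done.append(site)
        if not all(healthy(s) for s in sites[start:end]):
            for site in reversed(done):   # automated rollback, newest first
                rollback(site)
            return [], done
        start = end
    return done, []
```

With waves of 10%, 30%, then 60%, a change that would have been 200 simultaneous incidents is instead one incident at 20 sites, automatically reverted.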
The difference between a multi-site network that works and one that doesn't is rarely the hardware or the architecture. It's whether the organization treats network operations as an engineering discipline or an administrative task.
What we do in this sector
We work with multi-site enterprises on the problems described above. Not in the abstract. At the level of BGP route policy, VXLAN fabric design, RADIUS server architecture, and SD-WAN platform selection against real-world underlay conditions. The specific work depends on where you are in the lifecycle:
Network architecture and design. For organizations building a new multi-site network or fundamentally redesigning an existing one. We produce detailed designs that cover topology, addressing, routing, segmentation, security, WiFi, and operational monitoring. Vendor-agnostic: we evaluate every viable platform against your specific requirements, not our preferred partner list.
MPLS-to-SD-WAN migration. Hybrid architectures, phased migration plans, BGP transition strategy, and parallel running periods. We've done enough of these to know where the bodies are buried and how to avoid creating new ones.
Operational maturity assessment. For organizations that have the network but can't operate it consistently. We audit the configuration management, monitoring, change management, and documentation practices and produce a practical improvement roadmap. Not theory. Specific tools, processes, and organizational changes that will make the network manageable.
Fractional CTO / Virtual IT Director. Many mid-market multi-site enterprises don't have (and don't need) a full-time network architect or CTO. They need someone who understands this domain at depth, available two to four days a month to set direction, evaluate vendors, manage the relationship with the MSP, and represent the technology function at board level.
Managing a multi-site network that's outgrown its design?
We've worked across enough multi-country deployments to know what actually holds up at scale. Tell us what you're dealing with.
Start a conversation