Broadcast networking is not enterprise networking with higher stakes. It is a fundamentally different discipline. The tolerances are different, the failure modes are different, and the people operating the equipment are different. Designing connectivity for a live broadcast requires understanding all three of those realities simultaneously, and most network engineers (even very good ones) have never had to think about any of them.

We have spent the better part of fifteen years designing, deploying, and operating critical connectivity for broadcast and media organizations. Live news, sports, entertainment, corporate events that are functionally live broadcasts even if nobody calls them that. The common thread across all of it: the network either works perfectly or the production fails. There is no graceful degradation in live television. There is working, and there is a black screen.

Why broadcast is different

In enterprise networking, you design for averages. A 100 Mbps circuit that delivers 95 Mbps most of the time is a good circuit. A 20ms latency path that occasionally spikes to 80ms is acceptable for almost every business application. If a VPN tunnel flaps for three seconds and reconverges, nobody notices.

Broadcast tolerates none of that. A live HD video stream at 1080p50 encoded in H.265 needs a consistent 8-15 Mbps. Not as an average, as a floor. If bandwidth dips below the encoding bitrate for even two seconds, you get macro-blocking, frame drops, or a complete loss of the contribution feed. Your viewers see it. Your production gallery sees it. The commissioner sees it. And the conversation that follows is not a polite trouble ticket.

Latency is the other constraint. For a live news two-way (anchor in the studio, reporter in the field) end-to-end glass-to-glass latency above 500ms makes the conversation awkward. Above 800ms it becomes unusable. The presenter and the reporter start talking over each other, the director cuts away, and the segment dies. That latency budget includes encoding, network transit, decoding, and any processing in the production chain. The network's share of that budget is typically 150-300ms, which means you cannot afford variable delay. A path that delivers 100ms most of the time but occasionally spikes to 400ms is worse than a path that consistently delivers 250ms.
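The arithmetic behind that budget is worth making explicit. The sketch below walks through it with illustrative component figures (the encode, decode, and production-chain numbers are assumptions for illustration, not measurements); only the 500ms threshold and the 150-300ms network share come from the discussion above.

```python
# Illustrative glass-to-glass latency budget for a live two-way.
# All figures in milliseconds. Component values are assumptions.

TARGET = 500  # ms; above this the conversation starts to feel awkward

budget = {
    "encode": 120,            # field encoder (assumed)
    "decode": 80,             # facility decoder (assumed)
    "production_chain": 100,  # frame syncs, mixing, processing (assumed)
}

fixed = sum(budget.values())
network_budget = TARGET - fixed

print(f"Fixed processing: {fixed} ms")
print(f"Network budget:   {network_budget} ms")
# With these assumptions the network gets 200 ms, inside the typical
# 150-300 ms share. A path that spikes past that, even briefly,
# blows the whole budget, which is why consistency beats a low average.
```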

Then there is the operational reality. The person plugging in the encoder at the remote end is usually not a network engineer. They are a camera operator, a sound technician, or a production runner who was handed a pelican case and told "set this up and make sure the light goes green." The system has to work when operated by someone whose primary expertise is framing a shot, not configuring a network. If your solution requires SSH access to troubleshoot, you have already failed.

The satellite-to-IP transition

For decades, live broadcast contribution meant one thing: a satellite truck. An SNG (Satellite News Gathering) vehicle, a trained operator, a satellite booking window, and a direct uplink to a transponder. It was expensive, it required specialist personnel, and booking windows were inflexible. But the signal either made it to the satellite or it didn't. There was very little in between, and that binary simplicity was its own form of reliability.

The shift to IP-based contribution started gradually in the early 2010s and accelerated hard after 2018. Three things converged. Cellular networks reached sufficient density and bandwidth in urban areas. Encoding hardware became small enough and efficient enough to fit in a backpack. And SRT (Secure Reliable Transport) gave broadcasters a transport protocol that could actually cope with the realities of public internet delivery.


Today, the majority of day-to-day live contribution (news hits, sports sideline reporting, remote interviews, on-location packages) runs over IP. The satellite trucks still exist, but they have moved from being the default to being the backup, reserved for Tier 1 events where absolute certainty is required or for locations where cellular coverage simply does not exist.

This transition saved broadcasters enormous amounts of money. But it also introduced a new category of problems that did not exist in the satellite era, and those problems are networking problems.

Bonded cellular: how it actually works

The core technology enabling IP-based broadcast contribution is bonded cellular. The concept is straightforward: take multiple SIM cards from multiple cellular carriers, aggregate their bandwidth, and treat the combined connection as a single transport pipe. A typical field unit holds six to eight SIMs across three or four different carriers. If one carrier's network is congested or drops entirely, the others compensate. The aggregate bandwidth is the sum of all active connections, minus overhead.

In practice, a well-configured bonded cellular unit in an urban area with good multi-carrier coverage delivers 40-80 Mbps of usable uplink bandwidth. That is comfortably enough for one or two HD contribution feeds, with headroom for return video (so the reporter can see the studio output) and comms.
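A back-of-envelope version of that sizing looks like this. The per-SIM uplink figures and the 15% overhead allowance are assumptions for illustration; real numbers vary by site, carrier, and time of day.

```python
# Rough aggregate-uplink estimate for a bonded cellular unit:
# sum of per-SIM uplinks, minus bonding/FEC/retransmission overhead.
# All per-SIM figures and the overhead fraction are assumptions.

sim_uplinks_mbps = {
    "carrier_a_sim1": 12.0,
    "carrier_a_sim2": 9.5,
    "carrier_b_sim1": 14.0,
    "carrier_b_sim2": 8.0,
    "carrier_c_sim1": 11.0,
    "carrier_c_sim2": 10.5,
}

BONDING_OVERHEAD = 0.15  # protocol, FEC, retransmit headroom (assumed)

raw = sum(sim_uplinks_mbps.values())
usable = raw * (1 - BONDING_OVERHEAD)

print(f"Raw aggregate:   {raw:.1f} Mbps")
print(f"Usable estimate: {usable:.2f} Mbps")
# Lands in the 40-80 Mbps range you see from a well-configured
# urban deployment: room for two HD feeds plus return and comms.
```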

The bonding happens at both ends. The field unit splits the encoded video stream across all available cellular connections using a proprietary or standards-based bonding protocol. At the receive end (typically an aggregation server in a data center or at the broadcaster's facility) the fragments are reassembled into a single coherent stream. Error correction, packet reordering, and jitter buffering all happen at this aggregation point.
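The send-side scheduling idea can be sketched generically: distribute outgoing packets across links in proportion to each link's measured uplink capacity. This is an illustration of the concept, not any vendor's actual bonding algorithm, and the link figures are assumptions.

```python
# Generic weighted round-robin sketch of bonded-link scheduling.
# Each link gets schedule slots proportional to its measured uplink.

import itertools

def build_schedule(link_mbps: dict, slots: int = 10) -> list:
    """Return a repeating list of link names, weighted by capacity."""
    total = sum(link_mbps.values())
    schedule = []
    for name, mbps in link_mbps.items():
        schedule += [name] * max(1, round(slots * mbps / total))
    return schedule

links = {"carrier_a": 15.0, "carrier_b": 10.0, "carrier_c": 5.0}
schedule = build_schedule(links)

# Cycle through the schedule to assign outgoing packets to links;
# the faster carrier carries proportionally more of the stream.
sender = itertools.cycle(schedule)
first_ten = [next(sender) for _ in range(10)]
print(first_ten)
```

A real implementation reweights continuously as per-carrier throughput changes, which is exactly why the receive-side aggregation point has to handle reordering and jitter.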

The critical design decisions are not about the bonding hardware itself. They are about SIM strategy (which carriers, which plans, how you handle data caps and fair-use throttling), aggregation server placement (latency to the production facility matters), and monitoring (knowing which carriers are performing and which are not, in real time, across potentially dozens of simultaneous field deployments).

The failure mode that catches people out is cellular congestion at large events. Your bonded unit works perfectly during the site survey on a Tuesday afternoon. On match day, with 60,000 fans streaming, uploading, and video-calling simultaneously, every carrier in range is saturated. Your aggregate bandwidth drops from 60 Mbps to 8 Mbps. That is below your encoding bitrate. The feed breaks up, or the encoder drops to a lower quality profile that the production gallery did not expect and cannot use.

Mitigating this requires either dedicated cellular capacity (carrier relationships and pre-arranged dedicated bearers) or a hybrid approach where bonded cellular is supplemented by a dedicated circuit. Fiber if available, satellite if not. The point is that it has to be designed for the worst case, not the Tuesday afternoon.

SRT: why it won

Secure Reliable Transport started as a proprietary protocol developed by a single vendor and was open-sourced in 2017. Within five years it became the dominant transport protocol for live video over IP. It won for specific technical reasons, not marketing.

SRT operates over UDP, which means it avoids the head-of-line blocking problem that makes TCP unsuitable for real-time video. It implements its own retransmission and error correction layer on top of UDP, using Automatic Repeat reQuest (ARQ) to selectively retransmit lost packets. The key innovation is the configurable latency buffer: you tell SRT how much latency you can tolerate, and it uses that buffer to absorb jitter and recover from packet loss. Set the buffer to 120ms and SRT can recover from transient losses of up to 120ms duration. Set it to 500ms and it can handle significantly worse network conditions, at the cost of higher end-to-end latency.
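The relationship between the latency buffer and recoverability comes down to round trips: each retransmission attempt costs roughly one RTT, so the buffer bounds how many recovery rounds fit before a packet's playout deadline. The sketch below is a simplification of SRT's actual NAK/retransmit timing, but it captures the sizing intuition (commonly cited guidance is to set the SRT latency to a small multiple of the path RTT).

```python
# Back-of-envelope: how many ARQ recovery rounds fit inside an SRT
# latency buffer. "One RTT per attempt" is a deliberate simplification
# of SRT's real NAK/retransmit timing.

def recovery_rounds(buffer_ms: float, rtt_ms: float) -> int:
    """Approximate retransmission attempts the buffer allows."""
    if rtt_ms <= 0:
        raise ValueError("RTT must be positive")
    return int(buffer_ms // rtt_ms)

# Managed fiber: low RTT, so even a small buffer leaves margin.
print(recovery_rounds(buffer_ms=40, rtt_ms=10))    # 4 attempts
# Bonded cellular: higher RTT needs a bigger buffer for the same margin.
print(recovery_rounds(buffer_ms=250, rtt_ms=60))   # 4 attempts
# Undersized buffer on a long path: no time to recover anything.
print(recovery_rounds(buffer_ms=120, rtt_ms=300))  # 0 attempts
```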

This tunable latency-versus-resilience tradeoff is what makes SRT practical for broadcast. A studio-to-studio link over managed fiber can run SRT with a 40ms buffer. A bonded cellular contribution from a stadium might use 250ms. A satellite backhaul path with 600ms of inherent propagation delay needs a larger buffer still. The same protocol works across all of them, with the operator adjusting a single parameter to match the transport conditions.

SRT also handles encryption (AES-128 or AES-256), which matters for rights-protected content. And it provides detailed statistics (round-trip time, packet loss, retransmission rate, available bandwidth) that are invaluable for monitoring and troubleshooting live feeds.

RIST (Reliable Internet Stream Transport) is the other protocol in this space, developed by the Video Services Forum as a multi-vendor standard. RIST offers similar capabilities and is technically sound. But SRT had the head start, the open-source momentum, and broader encoder/decoder support. In practice, most broadcast IP deployments we design use SRT as the primary transport, with RIST as an alternative where specific vendor ecosystems require it.

Remote production: REMI and at-home models

Remote production (also called REMI (Remote Integration Model) or at-home production) is the logical extension of IP contribution. Instead of sending an entire production crew and a mobile production unit (an OB truck) to the venue, you send only the cameras, audio, and a minimal technical crew. The production gallery (the vision mixer, graphics, replay, audio mixing) sits in a permanent facility that might be hundreds or thousands of miles away.

This changes the connectivity requirement fundamentally. Instead of one or two contribution feeds from the venue back to base, you now need to transport multiple isolated camera feeds, multiple audio channels, talkback (bidirectional comms between the director and the on-site crew), graphics data, and tally signals. The bandwidth requirement jumps from 15-20 Mbps for a single encoded contribution feed to potentially 200-500 Mbps for a multi-camera remote production, depending on the number of cameras and whether you are transporting compressed or uncompressed video.
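A rough sizing exercise for the compressed end of that range looks like the following. Every bitrate and the headroom fraction here are illustrative assumptions; only the overall 200-500 Mbps range comes from the discussion above.

```python
# Rough uplink sizing for a compressed multi-camera REMI deployment.
# All per-feed bitrates and the headroom fraction are assumptions.

CAMERAS = 8
CAMERA_FEED_MBPS = 40   # lightly compressed ISO feed (assumed)
AUDIO_COMMS_MBPS = 5    # audio channels plus talkback (assumed)
RETURN_VIDEO_MBPS = 10  # program return to the venue (assumed)
HEADROOM = 0.25         # protocol overhead plus safety margin (assumed)

required = CAMERAS * CAMERA_FEED_MBPS + AUDIO_COMMS_MBPS + RETURN_VIDEO_MBPS
provisioned = required * (1 + HEADROOM)

print(f"Required:  {required} Mbps")
print(f"Provision: {provisioned:.0f} Mbps")
# An 8-camera compressed workflow lands comfortably inside the
# 200-500 Mbps band; uncompressed ST 2110 is an order of magnitude more.
```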

For high-end remote production (premium sports, major entertainment shows) the video transport shifts from compressed SRT to uncompressed or lightly compressed formats based on SMPTE ST 2110. ST 2110 is a suite of standards that defines how to transport professional media (video, audio, ancillary data) as separate essence flows over IP. It requires dedicated, managed network infrastructure with precise timing: PTP (Precision Time Protocol, IEEE 1588) synchronization across all devices to within microseconds. This is not something you run over the public internet. It requires dedicated dark fiber or wavelength services between the venue and the production facility, with guaranteed bandwidth and controlled latency.

The network design for REMI sits on a spectrum. At the lighter end, a news bureau doing daily live hits back to a central newsroom needs a reliable SRT path with modest bandwidth. At the heavier end, a premium sports broadcaster doing a 12-camera remote production of a live event needs a dedicated ST 2110 network with PTP timing, redundant paths, and hitless failover. We design across that entire spectrum, and the first question is always the same: how many sources, what quality, and what is the acceptable recovery time if something fails?

NDI and production-side networking

NDI (Network Device Interface) is worth addressing because it appears in almost every broadcast networking conversation, and there is persistent confusion about where it fits.

NDI is a production-side protocol, not a contribution protocol. It is designed for moving video between devices within a production facility or a local network. Between a camera and a vision mixer, between a graphics system and a playout server, between a replay system and a monitor wall. It uses mDNS for device discovery and can run over standard gigabit Ethernet, which makes it remarkably easy to deploy compared to SDI cabling or ST 2110.

NDI works well within its design parameters: local networks with adequate bandwidth and low, consistent latency. It does not work well over wide-area networks, through firewalls, or across network boundaries. It was not designed for that. Attempting to use NDI as a contribution protocol over a WAN is a recurring mistake we see in organizations that are new to IP-based production. It works in the demo room on a flat network. It falls apart the moment you introduce real-world routing, NAT, or variable latency.

The correct architecture uses NDI within the production facility (where it excels at simplifying device interconnection) and SRT or ST 2110 for transport between locations. Treating them as complementary protocols rather than competitors eliminates an entire category of deployment failures.

Temporary event networks

Some of the most challenging broadcast connectivity projects are temporary deployments. A music festival, a sporting event, a one-off broadcast from a location that has no existing infrastructure. You arrive at a field, a parking lot, or a venue that has never hosted a broadcast, and you need a production-grade network operational within hours.

The design constraints for temporary event networks are severe. There is no fiber. There may or may not be usable cellular coverage once the site fills up. Power may be generator-fed and unreliable. The equipment has to survive weather, dust, crowds, and the general chaos of a live event build. And it has to be deployed by a crew that is simultaneously building staging, rigging cameras, and running audio. The network is one of twenty things they are setting up, not the only thing.

Our approach to temporary broadcast networks starts with the connectivity layer: bonded cellular as primary (with SIM strategy designed for the specific location and expected crowd density), supplemented by satellite as a guaranteed-bandwidth backup. We pre-configure the entire network stack (switches, routers, wireless access points, bonding units, SRT encoders/decoders) and ship it as a single rack or set of pelican cases with color-coded cabling and a laminated setup guide. The goal is a four-hour deploy time from truck arrival to live feed, executed by a two-person crew that is competent but not specialized in networking.

This is where design philosophy matters. Every component choice, every cable label, every default configuration is made with the question: can a tired technician at 5am, in the rain, with no internet access for looking up documentation, get this working? If the answer is no, the design is wrong. Broadcast networks are operated under pressure by people who have other priorities. The technology has to accommodate that reality, not ignore it.

For multi-day events, the temporary network also needs to support production offices, press facilities, and sometimes public Wi-Fi. All on the same physical infrastructure but properly segmented so that a journalist's laptop running a Windows update does not compete with the live broadcast feed for bandwidth. VLANs, QoS policies, and traffic shaping are not optional in these environments. They are the difference between a broadcast that works and one that drops out during the headliner's set.

Satellite: backup, not legacy

We are sometimes asked whether satellite is dead for broadcast. It is not. What has changed is its role.

Satellite remains the right primary choice in specific scenarios: locations with no terrestrial connectivity, maritime broadcast (ships at sea with no cellular coverage; we cover this in depth on our maritime page), and ultra-high-reliability requirements where the broadcaster needs a transmission path that is completely independent of terrestrial infrastructure.

For everything else, satellite has moved into the backup role. The architecture we design most frequently uses bonded cellular or dedicated fiber as the primary contribution path, with a satellite terminal (typically a flyaway VSAT or a compact Ka-band terminal) available as a hot standby. The SRT protocol makes this failover clean: both paths can be active simultaneously, with the receive end selecting the better-performing source. If the primary path degrades, the switch to satellite is seamless from the production gallery's perspective.
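The receive-side selection logic in that hot-standby architecture can be sketched as follows, using the kind of per-link statistics SRT exposes (packet loss, RTT). The loss threshold and the prefer-primary policy are illustrative assumptions, not a description of any specific product's failover behavior.

```python
# Sketch of receive-side path selection between a primary contribution
# path and a hot standby, driven by SRT-style link statistics.
# The 2% loss threshold is an assumed operational policy.

MAX_LOSS_PCT = 2.0  # beyond this, treat a path as degraded (assumed)

def pick_source(primary: str, standby: str, stats: dict) -> str:
    """Prefer the primary while it is healthy; fail over otherwise."""
    if stats[primary]["loss_pct"] <= MAX_LOSS_PCT:
        return primary
    if stats[standby]["loss_pct"] <= MAX_LOSS_PCT:
        return standby
    # Both degraded: take whichever path is losing fewer packets.
    return min((primary, standby), key=lambda p: stats[p]["loss_pct"])

stats = {
    "bonded_cellular": {"loss_pct": 0.4, "rtt_ms": 80},
    "satellite":       {"loss_pct": 0.1, "rtt_ms": 600},
}
print(pick_source("bonded_cellular", "satellite", stats))
# -> bonded_cellular: stays on the cheaper path while it is healthy

stats["bonded_cellular"]["loss_pct"] = 6.5  # match-day congestion
print(pick_source("bonded_cellular", "satellite", stats))
# -> satellite
```

Keeping both paths active and deciding at the receiver is what makes the switch invisible to the production gallery.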

The cost model reinforces this architecture. A bonded cellular contribution costs a fraction of a satellite booking. Using cellular as the default and satellite as the exception lets broadcast organizations cover more events with the same budget, reserving satellite spend for the situations where it is genuinely needed.

What we deliver for broadcast organizations

We work with broadcasters and production companies at every stage of the IP transition. For organizations still primarily satellite-based, we design the migration path: pilot deployments, side-by-side comparisons, training programs, and the gradual buildout of bonded cellular capability. For organizations already operating on IP, we optimize: SIM strategy refinement, aggregation server architecture, monitoring infrastructure, and the design of hybrid satellite-plus-cellular failover systems.

For remote production initiatives, we design the end-to-end network architecture from venue to production gallery, whether that means a compressed SRT workflow over managed internet or a full ST 2110 deployment over dedicated fiber with PTP synchronization.

For temporary and event-based broadcasting, we design deployable network packages that are pre-configured, field-proven, and operable by production crews rather than network engineers. We handle the SIM procurement, carrier relationships, aggregation server provisioning, and monitoring setup so that the production team can focus on making television.

We also work closely with live event production companies where the broadcast element is one part of a larger event infrastructure. The connectivity requirements for the broadcast feed, the production network, the artist/talent network, and the public-facing infrastructure all need to coexist on shared physical plant without interfering with each other. Getting that segmentation right is the difference between a smooth event and a midnight crisis call.

Need broadcast connectivity that works under pressure?

Whether you are planning a satellite-to-IP migration, designing a remote production workflow, or building a deployable event network, we have done it before and we can walk you through exactly what it takes.

Talk to us