For decades, live broadcast meant satellite trucks. A massive vehicle, a trained operator, a satellite booking costing thousands per hour, and a narrow window to get your content from the field to the studio. It worked. It was expensive, inflexible, and required specialist knowledge that took years to develop. But it worked. The signal went up to a transponder 36,000 kilometers above the equator and came back down at the studio. Physics guaranteed the link. The economics guaranteed that only well-funded broadcasters could afford it.

That era is ending. Bonded cellular, SRT, RIST, and increasingly capable IP infrastructure are replacing satellite for the majority of live broadcast contribution. Not all of it (there are still use cases where satellite is the right answer, and anyone who tells you otherwise hasn't worked in enough remote locations) but the balance has shifted decisively. The inflection point happened when the cost of a bonded cellular unit dropped below the cost of a single satellite truck deployment, and the quality became good enough that viewers couldn't tell the difference.

Having designed and deployed broadcast IP solutions across multiple countries and continents, I can tell you what the transition actually looks like. Not the vendor pitch. The reality.

What changed

Several developments converged to make broadcast-grade IP contribution viable, and understanding which ones matter most helps explain where the technology works well and where it still falls short.

4G/5G density. Urban and suburban cellular coverage reached the point where bonding multiple carriers could deliver aggregate bandwidth exceeding satellite uplinks. In most major cities, you can reliably aggregate 50-100 Mbps of uplink across four or five SIMs. In areas with good 5G coverage, individual carriers can deliver 50+ Mbps uplink on a single connection. That's more than enough for broadcast-quality 1080p50 or even 4K contribution, provided the encoding is efficient.

SRT (Secure Reliable Transport). Developed originally by Haivision and released as open source, SRT is a transport protocol designed specifically for live video over unreliable networks. It handles packet loss through ARQ (Automatic Repeat reQuest) retransmission, absorbs jitter through configurable latency buffers, and corrects out-of-order delivery, all while maintaining low end-to-end latency. SRT gave broadcasters a transport protocol that could work over the public internet with predictable quality. Before SRT, sending broadcast-grade video over the internet meant either accepting unpredictable quality or building expensive private networks.

RIST (Reliable Internet Stream Transport). An alternative to SRT, developed by the Video Services Forum as a multi-vendor standard. RIST uses a similar ARQ-based approach to SRT but was designed from the ground up as an interoperable standard rather than a single-vendor project that was later opened. The protocol has two profiles: Simple Profile handles basic error correction, Main Profile adds encryption, authentication, and tunneling. RIST's advantage is vendor interoperability. A RIST encoder from one manufacturer should work with a RIST receiver from another without compatibility headaches.

Encoder miniaturization. Broadcast-quality H.265/HEVC encoding in a backpack-sized unit, capable of pushing 1080p50 at low latency over bonded cellular connections. The latest generation of field encoders from LiveU, Dejero, and TVU weigh under two kilograms and can be camera-mounted. No truck required. No satellite booking. No two-person crew just for the transmission path.

How the architecture works

A typical bonded cellular broadcast setup has three layers, and understanding each one is essential for designing a reliable system.

The field unit. A portable encoder/bonder with multiple SIM slots (typically 6-8, sometimes up to 14 on the high-end units) across different cellular networks. The unit takes SDI or HDMI input from the camera, encodes it in real-time using H.265 or H.264, and distributes the encoded bitstream across all available cellular connections using a proprietary bonding algorithm. If one carrier drops or degrades, the others compensate automatically. The field unit does the heavy lifting. Encoding, bonding, forward error correction, and protocol encapsulation.

The bonding algorithm is where the vendors differentiate. LiveU's LRT (LiveU Reliable Transport) protocol, Dejero's HEVC encoding and adaptive bitrate bonding, and TVU's IS+ (Inverse StatMux Plus) all take slightly different approaches to the same fundamental problem: how to distribute a real-time video stream across multiple unreliable network paths and reconstruct it perfectly on the other end. The differences matter at the margins. Under severe network congestion, in high-packet-loss environments, and when latency requirements are tight.
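None of the vendors publish their bonding algorithms, but the weighting step at the heart of any of them can be sketched. The function below is an illustration, not any vendor's actual logic: it assigns packets to links in proportion to each link's measured uplink throughput using a weighted round-robin credit counter. Real implementations layer per-packet RTT and loss feedback, reordering buffers, and forward error correction on top of this.

```python
def distribute_packets(packet_ids, link_throughputs_mbps):
    """Assign packets to links in proportion to measured uplink throughput.
    Illustrative weighted round-robin; not any vendor's actual algorithm."""
    total = sum(link_throughputs_mbps.values())
    links = list(link_throughputs_mbps)
    assignment = {link: [] for link in links}
    credits = {link: 0.0 for link in links}
    for pid in packet_ids:
        # Each link accrues credit proportional to its share of total bandwidth.
        for link in links:
            credits[link] += link_throughputs_mbps[link] / total
        # Send the packet on the link with the most accumulated credit.
        best = max(links, key=lambda l: credits[l])
        credits[best] -= 1.0
        assignment[best].append(pid)
    return assignment

# One 20 Mbps carrier and two 10 Mbps carriers:
plan = distribute_packets(range(100),
                          {"carrier_a": 20.0, "carrier_b": 10.0, "carrier_c": 10.0})
print({k: len(v) for k, v in plan.items()})
# -> {'carrier_a': 50, 'carrier_b': 25, 'carrier_c': 25}
```

The hard part in production is not the weighting but keeping the throughput estimates honest while carriers fluctuate second by second, which is exactly where the vendors' proprietary feedback loops earn their keep.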

The aggregation server. Located in a data center or at the broadcaster's facility, this server receives the bonded streams, reassembles them, applies error correction, and outputs a clean SDI or IP feed (typically SRT or NDI) to the production gallery. This is the "other end" of the bonded connection. It's what makes bonding work, because the field unit and aggregation server coordinate to reconstruct the original stream from fragments that arrived via different paths at different times. Without the server, the field unit is just an encoder.

Server placement matters. The aggregation server needs to be geographically close to the cellular networks the field unit is using, or the round-trip time for ARQ retransmission requests becomes too long and quality degrades. For domestic broadcast, a server in a national data center works. For international broadcast, you either need a server in each country of operation or a cloud-based aggregation service, which all three major vendors now offer as a managed product.
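The effect of server distance can be made concrete. With an ARQ protocol, a lost packet can only be re-requested while it is still inside the receive buffer, so the number of retransmission chances is roughly the buffer size divided by the round-trip time. The buffer and RTT figures below are illustrative:

```python
def retransmit_opportunities(buffer_ms, rtt_ms):
    """How many times a lost packet can be re-requested before the playout
    buffer runs out. Below about 2, quality degrades quickly under loss."""
    return int(buffer_ms // rtt_ms)

# Field unit in London, aggregation server in a London data centre:
print(retransmit_opportunities(buffer_ms=300, rtt_ms=40))   # 7 chances
# Same unit homed to a server on another continent:
print(retransmit_opportunities(buffer_ms=300, rtt_ms=250))  # 1 chance
```

A single retransmission chance means any packet lost twice in a row is gone for good, which is why a distant server shows up on air as artifacts even when raw bandwidth looks fine.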

The management layer. A web-based dashboard that lets the operations team monitor all active feeds, see real-time bandwidth per carrier, adjust encoding parameters, and switch between presets. For a broadcaster running multiple simultaneous live feeds from different locations, this visibility is essential. You need to know, in real time, which feeds are healthy, which are degraded, and which are about to fail. LiveU Central, Dejero Control, and TVU Command Center all provide this, with varying degrees of sophistication.

LiveU, Dejero, TVU: an honest comparison

The bonded cellular market has consolidated around these three vendors, and each has genuine strengths and weaknesses.

LiveU has the largest installed base and the most mature product line. Their LU800 is the flagship field unit, supporting up to 14 connections (cellular, Wi-Fi, Ethernet, satellite) with internal 5G modems. LiveU's strength is reliability at scale. Their bonding algorithm has been refined over more deployments than anyone else's, and their cloud-based server infrastructure is well-distributed globally. The weakness: LiveU's ecosystem is relatively closed. You're using LiveU field units with LiveU servers, managed through LiveU's portal. Interoperability with non-LiveU equipment works through SRT or RTMP output, but the bonding itself is proprietary.

Dejero positions itself more strongly in the managed network space, with products designed for both broadcast and enterprise connectivity. Their EnGo field unit is compact and well-regarded, and their HEVC encoding is excellent for bandwidth-constrained environments. Dejero's distinguishing feature is their Smart Blending Technology, which bonds not just cellular but also managed network connections for fixed installations. The weakness: smaller installed base means fewer real-world deployment scenarios to draw on, and their cloud infrastructure isn't as geographically distributed as LiveU's.

TVU Networks has carved out a niche with their rack-mount products for fixed installations and their focus on the production workflow rather than just the transport. TVU Partyline enables cloud-based multi-party production, which is useful for remote production workflows where the production team isn't co-located. The weakness: TVU's field units have historically been larger and heavier than the competition, though the latest models have closed this gap.

The honest assessment: for pure field-to-studio contribution with maximum reliability, LiveU has the edge. For deployments that blend broadcast contribution with managed enterprise connectivity, Dejero is worth evaluating. For workflows that integrate contribution with cloud-based production, TVU's ecosystem is compelling. All three produce broadcast-quality results when properly deployed, and the differences between them are less significant than the differences between a properly designed deployment and a poorly designed one with any vendor.

What actually goes wrong

The technology works. The problems are operational, and they're predictable if you know what to look for.

Cellular congestion at events. The biggest challenge isn't coverage. It's capacity. When you're broadcasting live from a stadium with 60,000 people all using their phones, the cellular networks are saturated. Your bonded connection is competing with every Instagram live stream, TikTok upload, and group chat photo in the venue. During the pre-match period, you might have 80 Mbps aggregate uplink. By kickoff, you might have 15. During a goal, you might have 5. This is predictable and must be planned for.

Mitigation strategies include dedicated cellular allocations (some carriers offer guaranteed bandwidth SLAs for broadcast use at events), on-site small cells or COWs (Cell On Wheels) provided by the carrier, pre-positioned Wi-Fi backhaul from the venue's fiber infrastructure, and encoding profiles that gracefully degrade as bandwidth drops. The worst approach is assuming the cellular environment at a sold-out stadium will behave like the cellular environment in your office parking lot during testing.
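The "gracefully degrade" strategy amounts to a profile ladder: measure available uplink, leave headroom for retransmissions, and step down through progressively cheaper encoding profiles. The bitrates and thresholds below are illustrative, not vendor presets:

```python
# Hypothetical graceful-degradation ladder for a stadium deployment.
PROFILES = [  # (minimum usable uplink in Mbps, profile description)
    (13.0, "1080p50 @ 10 Mbps HEVC"),
    (6.5,  "1080p50 @ 5 Mbps HEVC"),
    (3.5,  "720p50 @ 3 Mbps HEVC"),
    (1.5,  "540p25 @ 1.2 Mbps HEVC"),
]

def select_profile(measured_uplink_mbps, headroom=0.8):
    """Pick the best profile that fits within a safety margin of measured
    bandwidth; the headroom leaves room for ARQ retransmissions."""
    usable = measured_uplink_mbps * headroom
    for required, profile in PROFILES:
        if usable >= required:
            return profile
    return "audio-only fallback"

print(select_profile(80))  # pre-match: 1080p50 @ 10 Mbps HEVC
print(select_profile(15))  # kickoff:   1080p50 @ 5 Mbps HEVC
print(select_profile(5))   # goal:      720p50 @ 3 Mbps HEVC
```

The ladder itself is easy; the operational discipline is testing it against the bandwidth curve of a real sold-out venue rather than the office car park.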

International SIM management. Broadcasting across multiple countries means managing SIM cards across dozens of carriers. Roaming rates, fair-use policies, data caps, activation delays, and carrier-specific APN configurations all become operational headaches. A broadcast that works perfectly in London may hit a data cap three hours into a Spanish football match because the roaming agreement only covers 20GB before throttling. Or the SIMs you provisioned for a German event don't activate because the carrier's provisioning system takes 48 hours and you arrived yesterday.

The solution is centralized SIM management. Some organizations use specialist SIM providers like Pangea or BICS that offer multi-carrier, multi-country SIMs with predictable data allowances and no fair-use surprises. Others maintain a library of local SIMs for each country they operate in, managed through a tracking system that monitors activation status, data usage, and expiry dates. Either approach works; what doesn't work is arriving at a venue in a foreign country with SIMs you haven't tested on the local networks.
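A tracking system doesn't need to be elaborate to catch the common failures. A minimal sketch of the pre-travel check, with made-up field names, SIM identifiers, and thresholds:

```python
from datetime import date

def preflight_sim_check(sims, trip_start, trip_days=3, cap_warn=0.8):
    """Flag SIMs that are expired, will expire mid-trip, or are close to
    their data cap. Thresholds are illustrative, not carrier policy."""
    problems = []
    for sim in sims:
        if sim["expiry"] < trip_start:
            problems.append((sim["name"], "expired"))
        elif (sim["expiry"] - trip_start).days < trip_days:
            problems.append((sim["name"], "expires mid-trip"))
        if sim["used_gb"] >= cap_warn * sim["cap_gb"]:
            problems.append((sim["name"], "near data cap"))
    return problems

sims = [
    {"name": "es-movistar-01", "expiry": date(2026, 9, 1), "used_gb": 2,  "cap_gb": 20},
    {"name": "es-vodafone-02", "expiry": date(2026, 6, 2), "used_gb": 18, "cap_gb": 20},
]
print(preflight_sim_check(sims, trip_start=date(2026, 6, 1)))
# -> [('es-vodafone-02', 'expires mid-trip'), ('es-vodafone-02', 'near data cap')]
```

The point is that both failure modes in the example, a SIM expiring mid-event and a roaming cap about to throttle, are exactly the ones that otherwise surface three hours into a live match.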

Latency expectations. Bonded cellular contribution adds latency compared to satellite. A well-configured bonded cellular link typically runs 0.5 to 4 seconds of glass-to-glass latency, depending on encoding settings, bonding buffer size, and network conditions. SRT adds its own configurable latency buffer on top. For a typical news contribution, 2-3 seconds is acceptable. For a live sports event where the commentator needs to react to what's happening on-screen, more than a second starts to create awkward timing gaps.

The latency is tunable. Lower latency means less buffer for error correction, which means the link is more sensitive to packet loss and network jitter. Higher latency means more buffer, which means better resilience but more delay. Getting this balance right for each deployment (not for every deployment, but for each specific use case) is where operational expertise matters.

Operator training. Satellite trucks were operated by specialists who spent years learning the craft. Bonded cellular units are often handed to camera operators or production assistants with minimal training. The technology is simpler, but it still needs someone who understands what the LED indicators mean, what to do when bandwidth drops below the encoding threshold, when to switch from auto to manual encoding profiles, and when to escalate to the engineering team. A 30-minute hands-on training session and a one-page quick-reference card solves 90% of operational issues. The remaining 10% needs someone who actually understands the system architecture.

The biggest mistake in broadcast IP transitions isn't technical. It's assuming that simpler technology means zero training. The failure mode is different from satellite, but it still exists. Satellite failed obviously. You either had a signal or you didn't. Cellular degrades gradually, and recognizing the signs of degradation before it becomes visible to the viewer is a skill that needs teaching.

RIST vs SRT in depth: when RIST is the better choice

The SRT vs RIST discussion in broadcast engineering circles often devolves into tribalism. SRT advocates point to its massive installed base, open-source maturity, and the SRT Alliance's vendor adoption. RIST advocates point to its standards-body pedigree and multi-vendor interoperability guarantees. Both sides are partly right, and the practical answer depends on the specific deployment scenario.

SRT's strength is simplicity. A single protocol handles encryption, error correction, and connection management. The caller/listener model is easy to understand and configure. The open-source reference implementation (libsrt) is well-maintained and has been battle-tested across millions of streams. For point-to-point contribution where you control both ends of the link and both ends use the same vendor's equipment, SRT is the obvious choice. It works, it's well-understood, and every broadcast encoder and decoder on the market supports it.

RIST becomes the better choice in several specific scenarios. The first is multi-vendor environments where interoperability is non-negotiable. RIST was designed by the Video Services Forum with interoperability as the primary goal. The conformance testing program means that a RIST Main Profile encoder from Cobalt Digital should work with a RIST receiver from Appear TV without testing, without firmware matching, without the "which version of the protocol are you running?" conversation that occasionally plagues SRT deployments between different implementations. If you're a broadcaster receiving feeds from multiple external sources (news agencies, freelance crews, partner broadcasters) mandating RIST gives you better odds of clean interoperability than mandating SRT, particularly with less common equipment vendors whose SRT implementation may have quirks.

The second scenario is multi-path bonding without a proprietary bonding layer. RIST Main Profile supports native bonding, sometimes called "seamless switching" or "hitless failover" between multiple network paths. You can send the same RIST stream over two different ISP connections, and the receiver will use both paths simultaneously, transparently handling the loss of either one. SRT, by itself, doesn't do this: it's a single-path protocol. Achieving multi-path redundancy with SRT requires an external bonding layer, a load balancer, or application-level logic to manage multiple SRT streams. For fixed installations like studio-to-transmitter links or inter-facility feeds where you want path diversity without a proprietary appliance in the middle, RIST's native bonding is genuinely useful.
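The receive-side half of seamless switching is conceptually simple: accept packets from either path and keep one copy of each sequence number. The sketch below illustrates the deduplication idea only; it is not the RIST wire protocol, which is RTP-based and works on arrival order with a reordering buffer rather than sorted lists.

```python
def merge_paths(path_a, path_b):
    """Merge one sequence-numbered stream received over two paths, keeping a
    single copy of each packet. The output only has a gap if BOTH paths
    lost the same sequence number."""
    seen = set()
    merged = []
    for seq, payload in sorted(path_a + path_b):
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    return merged

# Path A lost packet 2; path B lost packets 4 and 5.
a = [(1, "p1"), (3, "p3"), (4, "p4"), (5, "p5")]
b = [(1, "p1"), (2, "p2"), (3, "p3")]
print([seq for seq, _ in merge_paths(a, b)])  # -> [1, 2, 3, 4, 5]
```

This is why dual-path RIST survives the outright loss of either ISP connection: each path alone is lossy, but the merged stream is complete.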

The third scenario is environments where you need tunneling and NAT traversal across complex network topologies. RIST Main Profile includes a tunneling mechanism that encapsulates the video stream inside a GRE tunnel, which simplifies firewall traversal for inbound feeds. SRT's rendezvous mode addresses some NAT traversal issues, but in enterprise environments with strict firewall policies, RIST's tunneling approach can be easier to get approved by the security team because it looks like standard GRE traffic rather than an arbitrary UDP connection.

Where RIST falls short is ecosystem maturity. SRT has a larger installed base, more tooling, better monitoring integrations, and a more active open-source community. When something goes wrong with an SRT stream, you can pull up srt-live-transmit, analyze the statistics socket output, and diagnose the issue with well-documented tools. RIST's diagnostic tooling is improving but isn't as mature. For organizations that are standardizing on a single protocol and don't have specific multi-vendor or multi-path requirements, SRT remains the safer choice.

The 5G broadcast promise: where it actually stands

5G Broadcast (technically the 3GPP FeMBMS (Further evolved Multimedia Broadcast Multicast Service) work, introduced in Release 14 and evolved through Releases 16 and 17) has been generating conference presentations and press releases for years. The proposition is compelling: use the 5G cellular network as a broadcast distribution platform, delivering the same content to millions of devices simultaneously without consuming per-user bandwidth. It combines the best of cellular (ubiquitous devices, no special receiver hardware) with the best of broadcast (one-to-many efficiency, no capacity scaling problem).

The reality is considerably behind the marketing. As of mid-2026, 5G Broadcast has seen trial deployments at major sporting events. The Rohde & Schwarz and Qualcomm trials at various European football championships demonstrated the technology working in controlled conditions. BMW has run trials using 5G Broadcast for software updates to vehicles. Several European public broadcasters have participated in the 5G-MAG (Media Action Group) initiative to test the technology for live event coverage and emergency broadcasting.

But production deployments are scarce. The barriers are structural, not technical. Carriers need to allocate spectrum for 5G Broadcast, which means dedicating bandwidth that could otherwise serve revenue-generating unicast mobile data. The business model is unclear. Who pays for the broadcast infrastructure, the broadcaster or the carrier? Consumer devices need chipset and software support for the broadcast receive mode, and most current 5G handsets lack it. The Qualcomm Snapdragon X65 and later modems have the hardware capability, but the software stack to receive and render 5G Broadcast signals is not enabled by default on most devices. Getting it activated requires coordination between the device manufacturer, the operating system vendor, and the carrier. That's a lot of parties who all need to agree it's worth doing.

For broadcast contribution (getting feeds from the field to the studio) 5G Broadcast is largely irrelevant. Contribution is a point-to-point or point-to-few problem, not a one-to-many problem. Where 5G Broadcast could be transformative is in distribution: getting content to audiences in venues, at events, or in transit. Imagine every fan in a stadium receiving a multi-angle replay on their phone, delivered via 5G Broadcast without consuming any unicast cellular capacity. That's a real use case. It's just not a use case that's available for production deployment yet.

The honest assessment for anyone planning broadcast infrastructure right now: design your systems around bonded cellular for contribution and standard CDN/OTT delivery for distribution. Track 5G Broadcast developments. Ensure your encoding and packaging infrastructure can support CMAF (Common Media Application Format) and DASH, which are the likely delivery formats for 5G Broadcast content. But don't hold off on investment waiting for 5G Broadcast to arrive. The timeline for widespread availability remains uncertain, and the technology you need to build reliable broadcast IP infrastructure today is already mature.

Cost modeling: satellite vs cellular for different deployment profiles

The cost comparison between satellite and bonded cellular isn't a single number. It varies dramatically based on how frequently you deploy, where you deploy, and what quality level you need. Organizations that model this incorrectly (usually by comparing the per-event cost of a satellite truck against the per-event data cost of a cellular unit) miss the structural differences in how the costs accumulate.

For high-frequency deployers (200+ live events per year, typical of a national news operation), bonded cellular wins decisively. The capital cost of a fleet of cellular units is amortized across so many deployments that the per-event hardware cost becomes negligible. Data costs scale linearly with usage, but at 20-50GB per live event (assuming 1080p H.265 at 5-10 Mbps for a 2-4 hour broadcast), the monthly data spend per unit is manageable even on commercial carrier plans. The total annual cost per unit (hardware amortization, data, maintenance, cloud aggregation service fee) typically works out to a small fraction of what a satellite truck operation costs per event when you factor in the vehicle, the operator, fuel, satellite booking fees, and transponder time.
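The data figure is straightforward arithmetic plus protocol overhead. The 15% overhead factor below is a rough assumption for ARQ retransmissions, FEC, and headers; congested venues with heavy packet loss, or a parallel backup stream, push consumption toward the top of the 20-50GB range:

```python
def event_data_gb(bitrate_mbps, hours, overhead=1.15):
    """Approximate data consumed by one live event across all SIMs.
    The 15% overhead figure is a rough assumption, not a vendor number."""
    return bitrate_mbps / 8 * 3600 * hours * overhead / 1000  # GB

print(round(event_data_gb(5, 2), 1))    # short news hit: 5.2 GB
print(round(event_data_gb(10, 4), 1))   # full match coverage: 20.7 GB
```

Multiplying the per-event figure by the annual event count gives the data line of the cost model directly.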

For low-frequency deployers (10-30 events per year, typical of a regional broadcaster or a corporate communications team), the calculation is different. The capital cost of the cellular units is still significant relative to the number of deployments. But the comparison isn't against owning a satellite truck. It's against hiring one. A dry hire of a satellite uplink truck with operator for a day costs significant money. A bonded cellular unit that you own and deploy yourself costs the data. Even at low utilization, the cellular unit is almost always cheaper per event than truck hire, and dramatically more flexible.
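The low-frequency calculation is worth writing down explicitly, because the fixed costs dominate. Every number below is a placeholder to be replaced with real quotes; the structure of the model is the point:

```python
def cellular_cost_per_event(unit_capex, amortization_years, events_per_year,
                            data_cost_per_event, cloud_fee_per_year):
    """Per-event cost of an owned bonded cellular unit.
    All inputs are illustrative placeholders, not market prices."""
    annual_fixed = unit_capex / amortization_years + cloud_fee_per_year
    return annual_fixed / events_per_year + data_cost_per_event

# A regional broadcaster doing 15 events a year (hypothetical figures):
low_freq = cellular_cost_per_event(
    unit_capex=25_000, amortization_years=5,
    events_per_year=15, data_cost_per_event=150, cloud_fee_per_year=4_000)
print(round(low_freq))  # per-event cost to compare against truck dry hire
```

Even with pessimistic placeholders, the per-event figure tends to land well below a day's dry hire of an uplink truck with operator, which is why ownership usually wins even at low utilization.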

Where the calculation gets genuinely complex is for deployments in locations with poor cellular coverage. If your deployment profile includes remote locations where cellular bonding doesn't work and you need satellite anyway, then the question becomes whether you're maintaining two parallel systems (cellular for most events, satellite for some) or standardizing on satellite for everything. Maintaining two systems has operational overhead. Training, spare equipment, different operational procedures. But the cost saving on the majority of deployments that can use cellular usually justifies the dual-system approach for organizations large enough to absorb the operational complexity.

One cost that's frequently overlooked is the aggregation server. LiveU, Dejero, and TVU all offer cloud-based aggregation as a managed service, charged either per unit per month or per hour of usage. For a fleet of 20 cellular units, the annual cloud aggregation fee is a real line item. The alternative is running your own on-premise server infrastructure, which trades recurring fees for capital expenditure and operational responsibility. Organizations with existing data center capacity and broadcast engineering teams tend to prefer on-premise. Organizations without that infrastructure tend to prefer the managed service. Neither is wrong; both need to be in the cost model.

Monitoring and NOC operations for multi-feed cellular broadcasts

Running a single bonded cellular feed from a news reporter in the field is operationally simple. Running fifteen simultaneous feeds from different locations during a weekend of football matches is an entirely different challenge. The monitoring and NOC (Network Operations Center) requirements for multi-feed cellular broadcast operations are frequently underestimated by organizations making the transition from satellite.

With satellite, monitoring was relatively binary. The link was either up or it wasn't. The signal-to-noise ratio was either acceptable or it wasn't. The satellite operator at each truck was responsible for their own link, and the NOC's job was to receive the feeds and confirm quality. With cellular, the monitoring is continuous and multi-dimensional. Each feed has its own bandwidth profile that fluctuates in real time. Each carrier connection within each feed has its own health status. Encoding quality adapts dynamically to available bandwidth. The NOC needs to watch all of this simultaneously and make proactive decisions (switching to a backup encoder, calling the field operator to adjust settings, alerting the production team that a feed is about to degrade) before the problem becomes visible on air.

The vendor dashboards (LiveU Central, Dejero Control, TVU Command Center) are essential but insufficient for a mature NOC operation. They show you the health of your cellular units, but they don't integrate with your broader production monitoring. The NOC needs a single-pane view that shows cellular feed health alongside studio router status, playout server health, and transmission chain status. This typically requires integrating the cellular vendor's API output into your existing broadcast monitoring platform, using tools like TAG Video Systems, Phabrix, or Telestream iQ. The integration work is non-trivial but necessary for organizations running more than a handful of simultaneous feeds.

Alerting thresholds need careful tuning. A bonded cellular feed that drops from 50 Mbps aggregate to 20 Mbps is not an emergency: the encoder will adapt, and the quality will decrease slightly but remain broadcast-acceptable. A feed that drops from 20 Mbps to 5 Mbps is a warning: quality is degrading noticeably and the operator needs to investigate. A feed that drops below 3 Mbps is critical: the encoding will produce visible artifacts and the production team needs to know immediately. Setting these thresholds correctly, and tuning them for different event types and quality requirements, is operational work that doesn't happen by itself.
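Encoded as a sketch, the threshold mapping is trivial; the work is in choosing the numbers. These suit a 1080p HEVC contribution profile, and other event types need their own tuning:

```python
def feed_severity(aggregate_mbps):
    """Map aggregate uplink bandwidth to an alert level. Thresholds are
    illustrative for a 1080p HEVC contribution profile."""
    if aggregate_mbps < 3:
        return "CRITICAL"   # visible artifacts imminent; page production now
    if aggregate_mbps < 20:
        return "WARNING"    # quality degrading; operator investigates
    return "OK"             # encoder adapts on its own; no action needed

print(feed_severity(50))   # OK
print(feed_severity(15))   # WARNING
print(feed_severity(2.5))  # CRITICAL
```

The useful alert is usually not the level itself but the transition rate: a feed falling from OK to WARNING in thirty seconds deserves attention before it crosses the critical line.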

Staffing is the other dimension. A satellite truck operation required skilled operators in the field but minimal NOC intervention once the link was established. A multi-feed cellular operation requires less skill in the field (the units are simpler) but more active NOC management. The net headcount may be similar, but the skills are different. NOC operators who've spent their careers monitoring satellite feeds need retraining for cellular monitoring, where the indicators of impending failure are more subtle and the remediation options are more varied.

The return video problem

Most of the broadcast IP discussion focuses on contribution. Getting video from the field to the studio. But many broadcast workflows require video going the other way as well. Getting studio output back to the field is a problem that bonded cellular handles poorly, and it catches organizations off guard.

The most common return video requirement is confidence monitoring: the reporter or presenter in the field needs to see the program output so they know when they're on air, what the audience is seeing, and what's coming next in the running order. In a satellite workflow, the return video was typically delivered via a separate satellite receiver at the truck. The same satellite that carried the contribution uplink also carried a downlink feed of the program output. The latency was high (600ms+ for GEO satellite) but consistent and predictable.

With bonded cellular, there's no natural return path. The bonding architecture is asymmetric. It's designed to aggregate uplink bandwidth for contribution, not to receive high-bandwidth downlink feeds. The field unit's cellular connections are optimized for upload, and on most cellular networks, the download direction shares the same spectrum resources. Attempting to send a high-quality return feed to the field unit's IP address runs into NAT traversal problems, asymmetric routing, and contention with the outbound contribution stream for the same cellular bandwidth pool.

The practical solutions vary in complexity and quality. The simplest is to use a separate, low-bandwidth SRT or RTMP stream sent to the field unit over its existing cellular connection. This works for audio-plus-reduced-quality-video confidence monitoring, but it consumes some of the uplink-direction bandwidth and adds latency. LiveU and Dejero both offer return video features in their platforms, but the quality is intentionally limited to protect the primary contribution feed.

A better approach for workflows that need reliable return video is to provision a separate return path entirely. A dedicated 4G/5G router at the field position, not bonded with the contribution unit, receives a low-latency SRT stream from the studio. This keeps the return video traffic completely separate from the contribution path. The downside is additional equipment and another set of SIM cards to manage. The upside is that the return feed doesn't compete with contribution for bandwidth and can be engineered for the lowest possible latency.

For IFB (interruptible fold-back), the talkback audio from the studio to the presenter's earpiece, latency is more critical than video quality. Many operations send IFB over a standard mobile phone call or a VoIP application on a separate device, keeping it entirely independent of the IP video infrastructure. This is pragmatic and reliable, though it means the presenter is juggling multiple devices and audio sources.

The return video problem is one reason why remote production over bonded cellular alone remains impractical for full production workflows. The production team at the studio needs to send tally signals, intercom, mix-minus audio, graphics triggers, and confidence feeds back to the venue. Each of these has latency requirements that bonded cellular struggles to meet simultaneously while also carrying the contribution feeds. This is where dedicated fiber or managed circuits for the venue-to-studio path remain essential, with bonded cellular serving as backup or as the contribution path for secondary camera positions that don't need full return video.

SRT and RIST: protocol configuration in practice

Understanding these protocols at a technical level matters because they're the foundation on which broadcast-quality IP transport depends, and choosing between them (or using both) is a design decision that affects interoperability, resilience, and operational complexity.

SRT operates over UDP and uses a caller/listener model. The caller initiates the connection; the listener waits for incoming connections. SRT's error correction uses ARQ. When the receiver detects a missing packet, it requests retransmission. The configurable latency parameter determines how much time is available for retransmission attempts. Set it too low and packet loss becomes visible as artifacts. Set it too high and the delay becomes operationally problematic. Typical broadcast deployments use 120-500ms of SRT latency, depending on the network path quality.
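SRT deployment guidance commonly recommends sizing the latency buffer as a multiple of the measured path RTT, often cited as 3-4x, with SRT's 120 ms default acting as a floor. A sketch of that sizing rule:

```python
def srt_latency_ms(rtt_ms, multiplier=4, floor_ms=120):
    """Size the SRT latency buffer from measured RTT. The 4x multiplier is
    a widely used rule of thumb (enough time for several ARQ retransmission
    attempts); 120 ms is SRT's default latency."""
    return max(floor_ms, rtt_ms * multiplier)

print(srt_latency_ms(15))   # clean metro path: the 120 ms floor applies
print(srt_latency_ms(80))   # long-haul internet path: 320 ms
```

Note how the rule lands squarely in the 120-500ms band used in practice: a domestic path sits at the floor, and an intercontinental path with heavy jitter pushes toward the top of the range.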

SRT also supports AES-128 and AES-256 encryption, which matters for content security. Sending unencrypted broadcast feeds over the public internet is a compliance issue for most premium content rights holders. The encryption adds negligible latency and should always be enabled for production feeds.

RIST takes a similar approach to error correction but adds support for bonding at the protocol level (Main Profile), meaning you can aggregate multiple network paths natively without a proprietary bonding layer. This is attractive for fixed installations where you're sending a feed from a studio to a remote production facility over multiple ISP connections. RIST's interoperability guarantees also mean you can mix and match equipment vendors more freely, which reduces vendor lock-in for the transport layer.

In practice, most bonded cellular deployments use the vendor's proprietary transport between the field unit and aggregation server (because the bonding algorithm is the vendor's core IP), and then output SRT or RIST from the server for onward distribution. The protocol choice for the last mile (from server to production gallery) is where SRT vs RIST actually matters to the end user.

Remote production: the network architecture that makes it work

The shift from satellite to cellular is part of a larger trend: remote production, where the production team is located at a central facility rather than at the event venue. Instead of sending a full production crew, commentators, and an OB truck to every match, you send cameras and operators to the venue and backhaul all the feeds to a central production hub where the director, vision mixer, graphics operator, and audio engineer produce the output.

Remote production fundamentally changes the network requirements. Instead of a single program feed going from venue to studio, you need multiple camera feeds, multiple audio channels, intercom, tally signals, graphics data, and production talkback all flowing in both directions simultaneously. The aggregate bandwidth requirement is significantly higher than a single contribution feed, and the latency tolerance is tighter because the director needs to see and cut between cameras in near-real-time.

The network architecture for remote production typically involves a dedicated high-bandwidth path from the venue (fiber where available, bonded cellular as backup) carrying the camera feeds as uncompressed SMPTE ST 2110, lightly compressed NDI, or JPEG XS (which offers visually lossless compression at sub-frame latency). The return path carries mix-minus audio, intercom, and tally. The entire system needs to operate with end-to-end latency under one frame (20ms for 50fps content) for the production team to work naturally.
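To see why the aggregate requirement dwarfs a single contribution feed, a back-of-the-envelope budget helps. The sketch below assumes 10-bit 4:2:2 1080p50 video (20 bits per pixel) and a JPEG XS-style 10:1 compression ratio; the audio allowance and 20% overhead factor are illustrative assumptions:

```python
def venue_uplink_gbps(cameras: int, width: int = 1920, height: int = 1080,
                      fps: int = 50, bits_per_pixel: int = 20,
                      compression_ratio: int = 10, audio_mbps: float = 10,
                      overhead: float = 1.2) -> float:
    """Rough aggregate venue uplink for a remote production, in Gbit/s.

    Assumes 10-bit 4:2:2 video (20 bits/pixel) and a JPEG XS-style
    compression ratio; 'overhead' covers intercom, tally, graphics data,
    and transport framing. All figures are illustrative.
    """
    per_camera_bps = width * height * fps * bits_per_pixel / compression_ratio
    total_bps = cameras * per_camera_bps + audio_mbps * 1e6
    return round(total_bps * overhead / 1e9, 2)
```

Under these assumptions, an eight-camera remote production needs roughly 2 Gbit/s of venue uplink, and a single uncompressed 1080p50 feed alone is around 2.5 Gbit/s: numbers that make clear why fiber, not bonded cellular, carries the primary feeds.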

This is where the "just use bonded cellular for everything" narrative breaks down. Bonded cellular can handle contribution. It cannot, by itself, handle the full bidirectional requirements of a remote production workflow at the latency levels the production team needs. Remote production requires either fiber (ideal), dedicated managed circuits (acceptable), or a hybrid approach where the primary production feeds use wired infrastructure and secondary feeds use bonded cellular as a complement.

When satellite is still the right answer

Cellular-based broadcast IP doesn't replace satellite everywhere. There are scenarios where satellite remains the better choice, and pretending otherwise leads to failed deployments.

Remote locations with no cellular coverage. Rural areas, wilderness events, open ocean, desert deployments. If there's no cellular signal, bonding can't help. For the Dakar Rally, for yacht racing in the Southern Ocean, or for wildlife documentaries in national parks, satellite is often the only option. LEO satellite constellations like Starlink are changing this equation by providing broadband in locations that were previously satellite-only, but Starlink's uplink speed and latency profile don't yet match what's achievable with bonded cellular in coverage areas, and the service's contention-based nature means you can't guarantee bandwidth when you need it.

Extreme reliability requirements. National emergency broadcasts, military communications, scenarios where "best-effort" is not acceptable at any level. Satellite provides a deterministic, managed link with a defined SLA. Cellular provides a probabilistic link whose quality depends on factors outside your control. For most broadcast applications, "very reliable" is sufficient. For some, only "guaranteed" will do.

Massive simultaneous distribution. Satellite is inherently one-to-many. A single uplink reaches every receiver in the footprint simultaneously with zero additional cost per receiver. If you need to distribute a feed to hundreds of receive sites simultaneously (a common requirement in network distribution to affiliate stations) satellite's multicast architecture is more efficient than unicast IP, where each additional receiver requires its own stream.
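The one-to-many economics can be made concrete with simple arithmetic; the 20 Mbps feed rate and the receiver count below are illustrative assumptions:

```python
def unicast_distribution_mbps(receivers: int, feed_mbps: float = 20) -> float:
    """Total origin uplink needed to unicast one feed to N receive sites.

    Unicast IP distribution scales linearly with receiver count; a
    satellite uplink carries the same feed to every receiver in the
    footprint at constant cost.
    """
    return receivers * feed_mbps

# A single 20 Mbps feed to 200 affiliate stations needs 4,000 Mbps
# (4 Gbps) of origin capacity over unicast IP.
```

CDNs and multicast-capable managed networks can flatten this curve, but for raw point-to-multipoint distribution the satellite footprint remains hard to beat.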

Venues with known cellular problems. Some locations are predictably bad for cellular: deep inside buildings with reinforced concrete, basements, valleys with poor line-of-sight to cell towers, or venues where event crowds predictably overwhelm the local cellular infrastructure. If you know in advance that cellular will be problematic, satellite as the primary path (with cellular as backup) is the more reliable architecture.

The smart approach isn't satellite or cellular. It's designing a contribution architecture where the primary and backup paths use different technologies. Primary: bonded cellular. Backup: satellite or dedicated fiber. Or vice versa, depending on the specific deployment. The two technologies complement each other when designed as a resilient system rather than treated as competing alternatives. The organizations that get this right treat the transmission path as an engineering problem, not a purchasing decision.

Getting the transition right

The organizations that transition successfully from satellite to cellular share common characteristics that have nothing to do with which vendor they chose.

They pilot before they commit. Run bonded cellular alongside satellite for a season. Build confidence with real data from real events before decommissioning satellite trucks. Measure everything: latency, quality, reliability, operator feedback, cost per deployment. The data tells you where cellular works, where it doesn't, and where you need a hybrid approach. Committing to a full fleet transition based on a successful test in your office parking lot is a recipe for problems.

They invest in SIM management. A centralized SIM management platform that handles provisioning, monitoring, data usage, carrier relationships, and cost allocation across all deployments. This is operational infrastructure, not optional overhead. The organizations that treat SIM management as somebody's part-time responsibility end up with SIM cards that don't work when they're needed, bills that nobody can reconcile, and data caps that nobody noticed until mid-broadcast.

They design for failure. Every broadcast link should have a defined failover path. Primary: bonded cellular. Secondary: single-carrier 5G with SRT. Tertiary: satellite, Starlink, or store-and-forward. The fallback plan should be documented and tested before every deployment, not theoretical. What does the operator do when bandwidth drops below the minimum encoding threshold? When the aggregation server becomes unreachable? When all cellular carriers in the venue saturate simultaneously? If the answer to any of these is "I don't know," the deployment isn't ready.
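The failover ladder described above reduces to a simple policy: walk the links in priority order and take the first one that is up and above the minimum encoding threshold. A hypothetical sketch of that policy (the Link fields and the 5 Mbps threshold are assumptions for illustration, not a vendor API):

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    priority: int          # 0 = primary, 1 = secondary, 2 = tertiary
    up: bool
    uplink_mbps: float

def select_path(links: list[Link], min_encode_mbps: float = 5.0) -> str:
    """Pick the highest-priority link that is up and can sustain the
    minimum encoding bitrate; fall back to store-and-forward if none
    qualify. Illustrative sketch of the failover policy, not a product."""
    for link in sorted(links, key=lambda l: l.priority):
        if link.up and link.uplink_mbps >= min_encode_mbps:
            return link.name
    return "store-and-forward"
```

The point of writing the policy down, even this simply, is that it forces answers to the questions above: the thresholds, the ordering, and the final fallback all have to be explicit before the deployment, not improvised mid-broadcast.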

They train their people. Not just the technical team. Everyone who will touch the equipment in the field needs to understand the basics: how to power on and connect, how to read the status indicators, what to do when the link degrades, and when to call for help. The training doesn't need to be long. It needs to be practical, hands-on, and refreshed periodically. The operator who used the unit six months ago and hasn't touched it since needs a refresher before the next deployment.

They measure and iterate. After every deployment, capture the data: what worked, what didn't, what bandwidth was achieved, what the latency profile looked like, what the operator experienced. Build a body of knowledge about which venues are reliable, which carriers perform best in which locations, and what encoding settings work for each type of deployment. This institutional knowledge is what separates organizations that achieve broadcast-grade reliability from ones that are perpetually surprised by problems they've seen before.
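One way to accumulate that institutional knowledge is to reduce each post-deployment record to per-venue statistics that answer "is this venue reliable?" at a glance. A minimal sketch, with the record field names ('venue', 'achieved_mbps', 'dropouts') as illustrative assumptions:

```python
from collections import defaultdict
from statistics import mean

def venue_report(deployments: list[dict]) -> dict:
    """Summarise per-venue performance from post-deployment records.

    Each record is assumed to carry 'venue', 'achieved_mbps', and
    'dropouts' keys; the field names are illustrative, not a schema
    from any particular product.
    """
    by_venue = defaultdict(list)
    for record in deployments:
        by_venue[record["venue"]].append(record)
    return {
        venue: {
            "deployments": len(rows),
            "avg_uplink_mbps": round(mean(r["achieved_mbps"] for r in rows), 1),
            "total_dropouts": sum(r["dropouts"] for r in rows),
        }
        for venue, rows in by_venue.items()
    }
```

Even a spreadsheet captures the same information; what matters is that the data is collected after every deployment and consulted before the next one.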

The broadcast IP revolution is real. The cost savings compared to satellite truck operations are substantial. The flexibility (deploying from locations where a truck couldn't reach, going live on shorter notice, scaling from one feed to twenty without proportional cost increase) is genuine. But it's a transition, not a switch-flip. The organizations that approach it as an engineering project (with proper design, testing, training, and continuous improvement) are the ones getting reliable broadcast-grade results. The ones that approach it as a cost-cutting exercise, deploying cheap units with minimal planning, get what they pay for.

Planning a broadcast IP transition?

Let's talk