How real-time is real-time streaming?

Real-time streaming, also known as ultra-low latency streaming, is just that: a near-immediate broadcast of a live event. Real-time streaming is generally understood to have a latency of three seconds or less, whereas conventional live broadcasting, all being well, should not exceed the 35-second mark. However, many factors come into play once the data leaves the venue or host that may increase latency, including network congestion, response times, and buffering events, among others.

Today, it is not only events like the World Cup or the Super Bowl that are streamed live for the world to see. For the last few years, social media platforms such as Instagram, Twitch, and TikTok have made it possible for users around the world to stream live content to their followers. Twitch even introduced a “Low Latency” video setting to help its users “respond more quickly to their chat and [foster] closer interactions between broadcasters and their community.” 

As “live” becomes the norm and “real-time” takes precedence, how can content broadcasters and platforms ensure ultra-low latency and real-time streaming when delivering live content to viewers around the world?

What is latency?

In the realm of content delivery, latency denotes the time between the moment live action is captured and the moment it appears on the end user's screen. Commonly referred to as glass-to-glass (G2G) or end-to-end (E2E) latency, this timeframe is affected by various factors, including network congestion, encoding methods, and transmission protocols. The line between low and ultra-low latency depends on the specific application and its benchmarks. Generally, low latency is achieved when G2G latency sits below the five-second mark. Ultra-low latency, or real-time streaming, goes further by aiming for the shortest possible interval between content generation and delivery. This is a critical consideration in applications such as video conferencing and gaming, which often demand responsiveness within milliseconds.

How to achieve real-time streaming

Achieving real-time streaming, also known as ultra-low latency, is crucial for applications where even the slightest delay can disrupt the user experience. Unlike traditional live streaming, which often comes with between thirty seconds and almost a minute of latency, real-time streaming reduces this time to mere seconds. This is made possible thanks to Web Real-Time Communications (WebRTC), a collection of protocols, standards, and JavaScript APIs designed for real-time communication on the web. Originally intended for browser-based peer-to-peer connections, WebRTC has evolved to meet the growing demand for ultra-low latency, enabling rapid data streams tailored to larger and more complex needs in the world of streaming solutions.
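WebRTC sessions are negotiated through an offer/answer exchange of session descriptions (SDP) over a signaling channel, after which media flows directly between peers. The sketch below models that handshake in plain Python; the `Peer` class and its methods are schematic stand-ins for illustration, not a WebRTC implementation (in a browser, this role is played by `RTCPeerConnection`):

```python
# Schematic model of WebRTC's offer/answer negotiation.
class Peer:
    def __init__(self, name: str):
        self.name = name
        self.local_description = None
        self.remote_description = None

    def create_offer(self) -> dict:
        # Real offers are SDP blobs describing codecs, transports, etc.
        self.local_description = {"type": "offer", "from": self.name}
        return self.local_description

    def create_answer(self, offer: dict) -> dict:
        self.remote_description = offer
        self.local_description = {"type": "answer", "from": self.name}
        return self.local_description

    def accept_answer(self, answer: dict):
        self.remote_description = answer


broadcaster, viewer = Peer("broadcaster"), Peer("viewer")
offer = broadcaster.create_offer()    # sent to the viewer via a signaling server
answer = viewer.create_answer(offer)  # returned the same way
broadcaster.accept_answer(answer)
# Once descriptions are exchanged, media flows peer-to-peer over UDP-based
# transports, which is what keeps WebRTC latency in the sub-second range.
print(broadcaster.remote_description["type"])  # answer
```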

Reducing latency with Edge Intelligence

System73’s live content delivery solution, Edge Intelligence, significantly reduces latency and operational costs for content providers. It does this principally by creating a content delivery tree and end-user peer-to-peer connections to distribute live content, thereby minimizing congestion and buffering events. The design of Edge Intelligence also leverages the WebRTC protocol to achieve safe, cost-effective, and near real-time live content delivery. Traditional approaches to achieving low latency can require significant investment in hardware, networking, and storage, along with complex system management. Edge Intelligence bypasses these necessities by harnessing peer-to-peer technology within a centrally managed network. This network can then anticipate playback requests: a progressive delivery approach preemptively distributes the most recent video segments to all end users, minimizing latency and buffering events and enhancing the overall quality of experience and viewer satisfaction.
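To illustrate the delivery-tree idea in general terms, the sketch below shows generic tree-based peer-to-peer distribution: the origin uploads a segment once and each peer forwards it to its children, so origin bandwidth stays flat as the audience grows. This is an illustrative model with hypothetical node names, not System73's actual implementation:

```python
from collections import deque

# A small delivery tree: each node maps to the peers it forwards segments to.
tree = {
    "origin": ["peer-a", "peer-b"],
    "peer-a": ["peer-c", "peer-d"],
    "peer-b": ["peer-e"],
    "peer-c": [], "peer-d": [], "peer-e": [],
}

def push_segment(tree: dict, root: str) -> dict:
    """Breadth-first push of a segment; returns each node's hop distance from the root.

    Each extra hop adds one forwarding delay, so shallow, wide trees keep
    worst-case latency low while still offloading the origin.
    """
    hops = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in tree[node]:
            hops[child] = hops[node] + 1
            queue.append(child)
    return hops

print(push_segment(tree, "origin"))
```

Here every viewer receives the segment within two forwarding hops of the origin, even though the origin itself serves only two direct connections.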

For more information about Edge Intelligence or to book a call with System73, visit