to start playing? OK, let’s give three 1-second duration chunks. Then we should expect 3 seconds latency, correct?”
Not exactly, says Erstein, noting that Unreal Streaming Technologies took that approach with HLS 4 years ago.
“Yes, you can achieve 3 seconds latency with HLS via Unreal Media Server, but it’s not stable,” says Erstein. “The latency can grow. [The] iOS player decides to buffer sometimes.”
On a roll, Erstein ran the extrapolated technical use case out to its logical conclusion. “Let’s make very small chunks, say, 100 ms,” he says. “So consider a DASH server sending chunks of 100 ms length. However, DASH and HLS fetch every chunk with a separate HTTP request, so you would need to send an HTTP GET request for every 100 ms chunk. In other words, you would have 10 HTTP requests per second!”
As Erstein points out, just creating each of these HTTP requests would take another 30–100 ms. With so many HTTP requests, this approach would essentially flood the network and thereby increase lag time, not because of segment length or the required number of segments, but because the server has to work much harder handling an ever-growing volume of HTTP requests.
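Erstein’s arithmetic can be checked with a quick back-of-the-envelope sketch. The 100 ms chunk length and the 30–100 ms figures come from the discussion above; treating that range as a per-request cost, and the function names themselves, are our own illustrative assumptions:

```python
# Back-of-the-envelope model of the per-chunk HTTP request math quoted
# above. The figures come from the article; treating 30-100 ms as a
# per-request cost is an assumption, and the function names are illustrative.

def requests_per_second(chunk_ms: float) -> float:
    """Separate HTTP GETs a player must issue per second of media."""
    return 1000 / chunk_ms

def request_overhead_fraction(chunk_ms: float, request_cost_ms: float) -> float:
    """Share of each wall-clock second spent just creating requests."""
    return requests_per_second(chunk_ms) * request_cost_ms / 1000

print(requests_per_second(100))             # 10.0 requests per second
print(request_overhead_fraction(100, 30))   # 0.3: 30% of each second
print(request_overhead_fraction(100, 100))  # 1.0: request time alone fills the second
```

At the 100 ms end of the range, the player would spend the entire second doing nothing but issuing requests, which is the flooding effect Erstein describes.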
“This is why the recommended chunk duration for HLS or DASH is 8–10 seconds,” says Erstein, “so that the player doesn’t need to issue an HTTP request more frequently than once in 8–10 seconds. So there you are, inherently 8–10 seconds behind the real time.”
So what is the solution?
Erstein’s approach is to aggregate HTTP requests in groups rather than issue a single HTTP request for every chunk.
The cleanest way to do this, and a way that’s supported by Wowza and other streaming media engines, is to use a persistent socket connection between the player and the server. It’s an HTTP request that persists for an extended period of time.
Not unlike the Session Initiation Protocol (SIP) used by VoIP phones or the session approach used by legacy streaming protocols, this single-HTTP-connection approach, using a persistent socket connection protocol called WebSockets, still delivers HTTP content.
From the packaging and segmentation standpoint, nothing changes, but HTTP gets out of its own way and allows these smaller segments, whether they’re the 100 ms chunks that Unreal Streaming Technologies espouses or larger chunks in the 1–2 second range, to flow continuously from the server to the player over that single WebSocket connection.
“The player issues a connect request only once in the beginning. Now the connection is established, and the server starts sending chunks,” says Erstein. “The player doesn’t need to connect to the server anymore.”
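The cost contrast Erstein draws can be sketched with a toy model. The 50 ms handshake cost, the chunk count, and the function names below are illustrative assumptions rather than figures from the article; in a real browser player, the chunks arriving over the WebSocket would typically be appended to a Media Source Extensions buffer for decoding:

```python
# Toy model contrasting per-chunk HTTP fetches (classic HLS/DASH style)
# with a single persistent WebSocket-style connection. The 50 ms
# handshake cost is an illustrative assumption, not a measured figure.

HANDSHAKE_MS = 50  # assumed cost of setting up one HTTP request/connection

def per_chunk_http_overhead(n_chunks: int, handshake_ms: int = HANDSHAKE_MS) -> int:
    """Classic HLS/DASH: every chunk is fetched with its own HTTP GET."""
    return n_chunks * handshake_ms

def websocket_overhead(n_chunks: int, handshake_ms: int = HANDSHAKE_MS) -> int:
    """WebSocket delivery: one connect up front, then chunks simply flow."""
    return handshake_ms  # paid once, regardless of how many chunks follow

# One minute of 100 ms chunks is 600 chunks:
print(per_chunk_http_overhead(600))  # 30000 ms spent on request handling
print(websocket_overhead(600))       # 50 ms, paid once at connect time
```

The point of the sketch is that per-chunk request overhead grows linearly with the number of chunks, while the persistent connection pays its setup cost exactly once.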
Wowza’s Knowlton explains the appeal of the WebSocket approach. “RTMP usage for delivery is likely to decline at an accelerating pace over the next 5 years,” he says. “WebRTC is well-positioned to pick up market share as support for it increases throughout the ecosystem. The Wowza Streaming Engine customer adoption of our scalable WebRTC functionality has been very high, rivaling the rapid adoption of HLS when we first introduced that in 2009.”
In addition to traditional media streaming protocols, Wowza Streaming Engine also has built-in WebSocket and HTTP Provider capabilities, so it’s quite possible to maintain the use of traditional RTMP and long-segment-length “classic” HLS while also experimenting with these newer approaches to persistent HTTP connectivity, all within a single server environment.
On the network and content delivery front, WebRTC is still under consideration by CDNs. “While Akamai hasn’t made any formal announcements regarding WebRTC, we do see it as important to addressing ultra-low latency requirements, those situations that require sub-1-second latency,” says Akamai’s Michels. “We feel the more traditional broadcasters and OTT services will continue to leverage HLS and DASH for delivery, with WebRTC being used to address the more specialized use cases.”
In conclusion, it appears that in 2017 there
are two things we can guarantee: consistency
and change. WebRTC and WebSockets clearly are building blocks for the next generation
of streaming delivery, especially if low latencies are required, but both RTMP and “classic”
HLS will continue to be a factor for their specific use cases.
Tim Siglin is a streaming industry veteran and longtime contributing editor to Streaming Media magazine.
Comments? Email us at firstname.lastname@example.org, or check
the masthead for other ways to contact us.