One of these is outlined in a January 2018 article published by the IEEE called “Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences,” which argues for weighting next-generation stream delivery based on anticipated gaze. If video optimization to date has focused on one to five frames of referential detail, enhancing video clarity while keeping latencies low based around the human visual system, the idea behind gaze-aware encoding is to move beyond content- or context-aware encoding and toward anticipatory viewing based on neuroscience. The article’s authors, several of whom work for the KTH Royal Institute of Technology in Stockholm, plus an author who works for Ericsson, premise the new solution as one that “aims to deliver high visual quality, in real time, around the users’ fixation points while lowering the quality everywhere else.”
In other words, the context around where the potential viewer might look is as important as the primary encoding itself. This next-generation delivery option could have significance in both quality enhancement and bandwidth reduction for VR-video delivery. Headset sales for VR and 360° video are beginning to rise, with the authors citing a 69% increase in headset sales in Q1 2017 compared to Q1 2016.
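The paper’s pipeline is considerably more involved, but the core idea can be sketched in a few lines of Python: given a predicted fixation point, allocate per-tile encoding quality so that it falls off with distance from the gaze. The tile grid, QP range, and falloff parameter below are illustrative assumptions, not values from the paper.

```python
import math

def allocate_tile_quality(gaze_x, gaze_y, grid_w=8, grid_h=8,
                          qp_best=22, qp_worst=40, falloff=0.25):
    """Sketch of gaze-aware quality allocation: tiles near the
    predicted fixation point get a low QP (high quality); quality
    drops off with distance. Coordinates are normalized to [0, 1].
    All names and parameters are illustrative, not from the paper."""
    qp_map = []
    for row in range(grid_h):
        for col in range(grid_w):
            # Center of this tile in normalized frame coordinates.
            cx = (col + 0.5) / grid_w
            cy = (row + 0.5) / grid_h
            dist = math.hypot(cx - gaze_x, cy - gaze_y)
            # Linear falloff: full quality within `falloff` of the
            # fixation point, worst quality beyond twice that radius.
            t = min(max((dist - falloff) / falloff, 0.0), 1.0)
            qp_map.append(round(qp_best + t * (qp_worst - qp_best)))
    return qp_map

# Example: viewer predicted to fixate slightly left of frame center.
qp = allocate_tile_quality(0.4, 0.5)
```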
How High Is Too High?
The question of accurately building out streaming delivery solutions to meet an anticipated number of viewers, at both peak and nonpeak times, has plagued the streaming industry from its earliest days. It was such a significant issue, both technically and financially, that it led to the rise of the initial content delivery networks (CDNs) as a way to minimize buildout costs for popular content publishers, while at the same time providing peak capacity if a particular on-demand video asset “went viral” or became very popular very quickly.
The need for CDN solutions hasn’t changed, but the technology around delivery has changed dramatically.
It’s possible, for instance, to forgo a CDN if content is being delivered from standard HTTP servers. Today’s primary delivery method uses fragmented MP4 files, segmented dynamically by virtually “splitting” a whole MP4 file into thousands or tens of thousands of 2- to 10-second small files (often referred to as segments) through the use of byte-range addressing. These segments are packaged up in de facto standards such as Apple HTTP Live Streaming (HLS) or the MPEG standard Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and then delivered in a semi-sequential order.
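To make byte-range addressing concrete, here is a minimal sketch, in Python, of an HLS media playlist that addresses every segment as a byte range within a single fragmented MP4 file. The file name video.mp4, the init-segment size, and the segment sizes are placeholders, not output from any real packager.

```python
def byterange_playlist(segment_sizes, target_duration=6, map_size=1024):
    """Sketch of an HLS media playlist addressing each segment as a
    byte range within one fragmented MP4 file. The file name and all
    sizes here are placeholder values."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:7",  # byte ranges with fMP4 need a recent version
        f"#EXT-X-TARGETDURATION:{target_duration}",
        # The init segment (movie header) is the first map_size bytes.
        f'#EXT-X-MAP:URI="video.mp4",BYTERANGE="{map_size}@0"',
    ]
    offset = map_size
    for size in segment_sizes:
        lines.append(f"#EXTINF:{target_duration}.0,")
        # n@o means: fetch n bytes starting at offset o of the same file.
        lines.append(f"#EXT-X-BYTERANGE:{size}@{offset}")
        lines.append("video.mp4")
        offset += size
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(byterange_playlist([1_500_000, 1_480_000, 1_520_000]))
```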
Given the need to deliver several of these segments prior to playback starting, the industry has knowingly boxed itself into a corner, choosing scale over lower-latency delivery. However, since it is now possible to achieve scale with plain-vanilla HTTP servers, many companies have chosen to forgo external CDN services, replacing them with their own HTTP server farms. In most cases these farms are virtual, accomplished by spinning up multiple virtual machine instances on the likes of Amazon Web Services, Google Cloud, Rackspace, or even the Microsoft Azure platform.
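How plain is “plain-vanilla”? Any web server that returns the playlist and segment files with sensible MIME types can act as an origin for pre-packaged content. The following toy sketch uses only Python’s standard library; a production origin would add caching headers, TLS, and byte-range support.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class SegmentHandler(SimpleHTTPRequestHandler):
    # Serve playlists and segments with streaming-appropriate MIME types,
    # keeping the parent handler's defaults for everything else.
    extensions_map = {
        **SimpleHTTPRequestHandler.extensions_map,
        ".m3u8": "application/vnd.apple.mpegurl",
        ".ts": "video/mp2t",
        ".m4s": "video/iso.segment",
        ".mp4": "video/mp4",
    }

# Serves the current working directory; point it at packaged output.
HTTPServer(("", 8080), SegmentHandler).serve_forever()
```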
But what if a content publisher wants to
deliver a lower-latency experience to its app
viewers, mimicking a television-like “channel
change” approach? In this case, there’s still a
raison d’être for CDN solutions.
How Low Is Too Low?
Flipping the script to address delivery issues from the latency angle, it seems every encoder, media server, and CDN provider is hyping its low-latency credentials. As we noted above, this is partly the fault of HTTP-based streaming, but not entirely.
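To see why segmented HTTP delivery carries an inherent latency floor, consider a back-of-the-envelope estimate: a player that buffers N segments of D seconds each starts playback at least N × D seconds behind live, before encoding and CDN time are counted. The encode and CDN figures in this sketch are illustrative guesses, not measurements.

```python
def glass_to_glass_estimate(segment_seconds, buffered_segments,
                            encode_s=1.0, cdn_s=1.0):
    """Rough latency floor for segmented HTTP delivery: a player
    buffering N segments of D seconds sits at least N*D seconds
    behind live, plus encode/package and CDN time (illustrative
    values, not measurements)."""
    return encode_s + cdn_s + buffered_segments * segment_seconds

# Classic HLS guidance: buffer three segments before starting playback.
print(glass_to_glass_estimate(6, 3))  # 6-second segments: ~20 s behind live
print(glass_to_glass_estimate(2, 3))  # 2-second segments: ~8 s behind live
```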
Given the consistent growth of OTT on-demand media assets and the parallel rise of live-linear OTT delivery—both occurring against a backdrop of more media consumption, but declining traditional OTA and cable live-linear delivery—there’s a desperate need for low-latency delivery that can scale. But is there a way to make OTT “channels” behave more like traditional TV channels?
New approaches to content delivery need to address the mounting problems in delivering unicast streams, while at the same time keeping in mind that multicast—the initial approach to delivering streaming at scale, back when we all thought a 500-channel galaxy had way too many media choices—is probably not going to be the panacea we need today.
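The unicast math explains the urgency: every viewer receives a private copy of the stream, so aggregate egress grows linearly with audience size, whereas multicast would send a single copy per network link. A quick, hypothetical illustration:

```python
def unicast_egress_gbps(viewers, kbps_per_stream):
    """Aggregate egress for unicast delivery: every viewer receives a
    private copy, so bandwidth grows linearly with audience size."""
    return viewers * kbps_per_stream / 1_000_000  # kbps -> Gbps

# Hypothetical audience: one million viewers of a 5 Mbps stream.
print(unicast_egress_gbps(1_000_000, 5_000))  # 5000.0 Gbps, i.e., 5 Tbps
```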
What options do we have for lowering overall latency?