WP40 STREAMING MEDIA SPOTLIGHT SERIES APRIL/MAY 2018 SPONSORED CONTENT
As with all codecs, video publishers can only deploy
HEVC to clients that have an HEVC decoder. Unlike H.264,
which plays almost anywhere, there are major gaps in HEVC
playback support, particularly for computer-based playback
via browsers like Firefox or Chrome. For this reason, HEVC
has primarily been used to deliver video to Smart TVs and
similar OTT and STB devices and for 4K or UHD content.
However, in June 2017, Apple announced that HLS would
support HEVC playback in macOS, tvOS, and iOS versions
that shipped later in 2017, opening up hundreds of millions
of compatible endpoints utilizing mobile connections that
can benefit from a more efficient codec. Though not all Apple
devices can play HEVC, you can maintain compatibility
with legacy devices by continuing to provide H.264-encoded
alternatives with properly constructed manifest files that
enable these devices to locate and retrieve these streams.
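For example, a master playlist along these lines lets HEVC-capable devices select the HEVC renditions while legacy devices fall back to the H.264 alternatives; the URIs and bitrates here are hypothetical, and the CODECS attribute is what tells the player which decoder each variant requires:

```
#EXTM3U
# HEVC rendition; "hvc1..." in CODECS signals an HEVC decoder is required
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1920x1080,CODECS="hvc1.2.4.L123.B0,mp4a.40.2"
hevc_1080p.m3u8
# H.264 rendition at the same resolution for legacy devices
#EXT-X-STREAM-INF:BANDWIDTH=4500000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
avc_1080p.m3u8
```

A player that can't decode HEVC simply skips the hvc1 variant and retrieves the avc1 stream instead.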
With our discussion of HEVC complete, let’s turn to CMAF.
CHALLENGES OF EXISTING CONTAINER FORMATS
No matter which codec you use to compress a video, that
video needs a format—or container—to deliver it to the viewer.
You’ll also need to choose a streaming delivery protocol for
delivery. We’ll discuss the latest advancements in protocols in
the Streaming Protocols section below, but here we’ll cover the
relationship between container format and protocol.
Briefly, a container format is data in the file header that
describes how video and associated metadata is stored within
a file. You probably know that a file with a .MOV extension is
a QuickTime file; technically this means that it’s stored in the
QuickTime container format. Though the container format
dictates file compatibility and playability, the compressed
video and metadata make up the vast bulk of the file.
The container format really is dictated by just a few bits of
data in the file header. In practice, this means that it’s easy
to convert from one container format to another; you don’t
modify the compressed video or metadata in any way, you
just change the bits in the header.
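To make that concrete, here’s a minimal Python sketch that parses the container identity from the “ftyp” box that begins MP4-family files. The byte layout below is hand-built for illustration, not read from a real file:

```python
import struct

def major_brand(data: bytes) -> str:
    """Parse the major brand from an MP4-family 'ftyp' box.

    MP4/QuickTime-family files usually begin with an 'ftyp' box:
    4-byte size, 4-byte type ('ftyp'), then the 4-byte major brand
    that identifies the container flavor.
    """
    size, box_type = struct.unpack(">I4s", data[:8])
    if box_type != b"ftyp":
        raise ValueError("no ftyp box at start of file")
    return data[8:12].decode("ascii")

# A hand-built 20-byte ftyp box as it might appear in a QuickTime file:
# size=20, type='ftyp', major brand='qt  ', minor version, one compatible brand.
qt_header = struct.pack(">I4s4sI4s", 20, b"ftyp", b"qt  ", 0, b"qt  ")
print(major_brand(qt_header))  # prints "qt  " (QuickTime's brand, space-padded)
```

Swapping those few header bytes for a different brand is essentially what a remuxing tool does when it converts container formats without touching the compressed video.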
In contrast, streaming delivery protocols are designed to
deliver video between a server and a player. These protocols
specify and use a container format, but also contain other
elements like manifest files, as you’ll learn more about in the
Streaming Protocols section below.
Before CMAF was announced, the various streaming
protocols used two different container formats. Apple’s
HTTP Live Streaming (HLS) used the MPEG transport stream
container format (MPEG-TS or .ts), which is the same format
used for decades in the cable and IPTV industries. When HLS
originated, each stream was divided into separate files called
segments, each with a .ts extension, which could number in
the thousands for even a short movie or show, complicating
file delivery and reducing the effectiveness of caching. Later,
HLS was updated to use single .ts files, with segments of that
file retrieved via byte-range requests that define discrete
chunks within the longer file.
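The byte-range mechanism can be sketched in a few lines of Python. The segment index below is hypothetical, standing in for the length-and-offset values a manifest advertises (in HLS, via the #EXT-X-BYTERANGE tag):

```python
# Sketch (not a real player) of mapping segments of a single-file
# stream onto byte-range requests.
stream = bytes(range(256)) * 4  # stands in for one long .ts or .mp4 file

# Hypothetical (offset, length) pairs a manifest might advertise.
segment_index = [(0, 256), (256, 256), (512, 512)]

def fetch_segment(data: bytes, offset: int, length: int) -> bytes:
    """Equivalent to an HTTP 'Range: bytes=offset-(offset+length-1)' request."""
    return data[offset : offset + length]

segments = [fetch_segment(stream, off, ln) for off, ln in segment_index]
assert b"".join(segments) == stream  # the segments tile the whole file
```

Because every client requests ranges of the same single file, a cache only has to store one object per rendition rather than thousands of tiny segment files.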
All other HTTP-based protocols, including Dynamic
Adaptive Streaming over HTTP (DASH), used the newer and
more flexible fragmented MP4 container format (fMP4 or
.mp4). Though you can produce separate fMP4 files for
each segment, the default mode of operation for DASH is a
single file with segments retrieved via byte-range request,
simplifying file delivery and improving the ability for the
files to be cached.
Because HLS used the MPEG-2 transport stream
container, while DASH and other HTTP technologies used
fragmented MP4 files, if a video publisher wanted to reach
all devices it had to package and deliver two versions of
each video—one in HLS, and one in DASH.
And that’s how things stood until Apple and Microsoft
came together in 2016 and announced a common format for
DASH and HLS. We’ll pick this back up in the next section
after describing Streaming Protocols.