HLS caching, the server-side management of the video segments that an HLS playlist links to, is generally used to reduce I/O load on the origin media server or, when the segments are stored at the edge, to increase the bandwidth efficiency of a content delivery network. With the proper architecture, HLS caching can also add many desirable features for the viewer.
Indeed, HLS is an ephemeral protocol: it was designed to stream live events into a local cache of temporary files, using an ever-changing playlist that points at the current cache files at the destination, served from a stateless HTTP server to a client that forgets everything as soon as its cache expires or fills up.
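That "ever-changing playlist" can be sketched in a few lines. Below is a hypothetical illustration, not the Traffic Saver's implementation: the server keeps a short sliding window of segments and advances the `#EXT-X-MEDIA-SEQUENCE` counter as old segments age out, which is exactly why a plain HLS client forgets everything behind the window. The function and window size are illustrative.

```python
from collections import deque

def make_playlist(segments, first_sequence, target_duration=6):
    """Render a minimal live playlist for the current window of segments."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{first_sequence}",
    ]
    for name, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(name)
    return "\n".join(lines) + "\n"

window = deque(maxlen=3)   # keep only the 3 newest segments
sequence = 0
for i in range(5):         # 5 segments arrive from the encoder
    if len(window) == window.maxlen:
        sequence += 1      # the oldest segment falls out of the window
    window.append((f"seg{i}.ts", 6.0))

print(make_playlist(list(window), sequence))
```

After five segments, the playlist lists only `seg2.ts` through `seg4.ts`; everything earlier has already been forgotten, which is the behavior a caching layer has to work around.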
But how are live streams actually used? With sporting events, we expect instant, easy-to-use replays. With films, a public that grew up on VCRs and DVD players expects fast-forward and rewind. It is also very important to give browser clients an HTML5 slider control that lets the viewer drag back or forward through the stream.
So given that viewers are probably going to “rewind” to see the play or scene again anyway, why not just anticipate it on the server?
The DVEO ATLAS Traffic Saver server ingests live HLS streams,
or just plain .ts streams, converting as necessary to what we
call an “HLS pull service.” Once cached on the Traffic Saver, the
managed HLS can be output directly to the viewers or to a CDN.
The Traffic Saver then provides buffering control for these pull services, allowing the server operator to specify various options such as a delay in seconds (or 0, of course), and even to specify separate delays by playback stream type (HLS, DASH, etc.).
The buffering management allows the server operator to serve a delayed live stream just by its URL.
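As an illustration of what "serving a delayed stream by URL" could look like, here is a small sketch of a helper that appends a delay value to a pull-service playback link. The parameter name `delay` and the base URL are assumptions for the example, not the Traffic Saver's documented interface.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def delayed_url(base_url, delay_seconds):
    """Return the playback URL with a (hypothetical) delay query parameter."""
    scheme, netloc, path, query, frag = urlsplit(base_url)
    params = dict(p.split("=", 1) for p in query.split("&") if p)
    params["delay"] = str(delay_seconds)   # illustrative parameter name
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))

print(delayed_url("http://example.com/pull/stream1/index.m3u8", 30))
# → http://example.com/pull/stream1/index.m3u8?delay=30
```

The appeal of this design is that the delay lives entirely in the link: the operator can hand out a live URL and a 30-second-delayed URL for the same cached stream without provisioning anything extra.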
By specifying such URL parameters in your playback links
(samples of which are provided for each of your pull services in