Latency from encoding the video stream depends heavily
on the selected approach: the encoding stage alone can contribute
anywhere from under 100 milliseconds to more than 10 seconds.
Broadcast-quality encoders offer very high-quality techniques such as multi-pass compression, but
introduce a significant amount of latency.
Software-based encoding can provide flexible, high-quality streaming with adaptive bitrate
capabilities where latency is not critical. Performance varies
with the content, the streaming setup and bitrate ladder, and
the CPU/GPU configuration.
Enterprise-class hardware encoders offer a
cost-effective, flexible, high-performance, and highly
deterministic option suitable for many low-latency
applications. They encode in high-quality formats, such as
H.264 and HEVC, that facilitate internet streaming.
Once the video stream is transported to the target
destination, it needs to be decoded for the display device.
A number of decoder options contribute latency
ranging from under 20 milliseconds to several
hundred milliseconds or longer, including the following:
• Software players
• Set-top boxes
• Dedicated hardware decoders
Software players have cost and accessibility advantages.
They are often free and allow viewing live video on a range
of screens. The tradeoff is increased and inconsistent decode
latency and reliance on CPU/computer configurations.
Set-top boxes (STBs) provide simple user interfaces and
are available at very low cost. Because STBs are based on
commonly available low-cost ASIC technology and cater
to a wide variety of formats, they have higher latency than
dedicated hardware decoders.
Dedicated hardware decoders provide the lowest possible
latency and are designed for high-performance, latency-critical applications.
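The encode and decode figures above combine with transport delay into an overall glass-to-glass latency budget. A minimal sketch of that accounting, where every per-stage figure is an illustrative assumption drawn from the ranges discussed above rather than a measurement of any particular product:

```python
def total_latency_ms(stages: dict) -> float:
    """Sum the per-stage latency contributions, in milliseconds."""
    return sum(stages.values())

# Hypothetical budget for a low-latency hardware pipeline.
# All figures are assumptions for illustration only.
glass_to_glass = {
    "encode (dedicated hardware)": 50,   # broadcast multi-pass could be seconds
    "network transport": 80,             # assumed public-internet figure
    "decode (dedicated hardware)": 20,   # low end of the decoder range above
}

print(total_latency_ms(glass_to_glass))  # 150
```

Budgeting this way makes it clear that no single stage dominates by default; the slowest chosen component sets the floor for the whole chain.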
When transporting video across the internet, there
are a number of factors that cause latency. Designers can
opt for dedicated networks (MPLS) which allow for the
provisioning of the necessary bandwidth to ensure quality
of service (QoS).
However, when cost is a factor, transporting across the
public internet is preferred. The public internet can be
challenging, though, and you'll need to plan for the following:
• Packet Loss: To send video data across the
internet, the encoder splits the compressed video
into very small IP packets. When faced with
congestion, the network may have to drop packets
as the video passes through. If packets are lost, the
video quality will be compromised, as the decoder
no longer has all of the information it needs to
rebuild the picture.
• Jitter: This occurs when packets are delivered with
inconsistent timing, or even out of order. Jitter can
have a compounding effect on latency and can become
a significant problem if not accounted for.
• Intermediate Network Devices: Devices including
streaming servers that replicate and repackage video
streams can add latency to video transmission.
• Physical Distance: Since the speed of light is a limiting
factor in all optical networks, the physical distance
between end points must be factored into planning.
Longer distance means more latency.
• IT Firewalls: Bridging between networks or streaming
protocols may add latency, so the impact of firewalls
needs to be considered.
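The physical-distance point above lends itself to a quick back-of-the-envelope calculation. Light in optical fiber travels at roughly c / 1.47 ≈ 2.0 × 10⁸ m/s (a commonly cited approximation for silica fiber); the route length and speed figures below are illustrative assumptions:

```python
# Approximate speed of light in silica optical fiber (m/s).
SPEED_IN_FIBER_M_PER_S = 2.0e8

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber, ignoring routing detours
    and per-hop processing."""
    return distance_km * 1_000 / SPEED_IN_FIBER_M_PER_S * 1_000

# e.g. an assumed ~5,570 km transatlantic path contributes
# roughly 28 ms each way before any other latency source is counted.
delay = propagation_delay_ms(5570)
```

Real paths are longer than the great-circle distance and add per-hop processing, so measured figures will exceed this floor; but the calculation shows why distance alone rules out very low glass-to-glass latency across continents.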
Although designing the ideal solution for transporting
high-quality live video over the internet can be challenging,
it is achievable with today's protocols and technologies.