ENABLING A CLOUD-BASED APPROACH
TO CONTENT DELIVERY
As the transformation of content delivery continues, the role of
cloud technology has become ever more pertinent, particularly in
terms of upgrading the contribution and primary distribution parts
of the chain. We can see, for instance, the widespread adoption
of cloud technologies in the I T industry and the prevalence of
cloud providers in internet-delivered content—particularly for
organizations that do not own their own networks.
It could be argued that the cloud is not an obvious fit for
contribution networks, which connect multiple locations and
require functionality at each of those locations. However, this
reaction is often based on the role of third-party cloud providers,
rather than on cloud technology itself. To understand whether
cloud technology is applicable, we need to look at the problems it
resolves and then ask whether those problems and solutions address
the needs of contribution networks.
The technology behind large-scale data centers was introduced
to address a specific need: how to scale the number of applications
without making operations unmanageable, maximizing the amount of
automation in the process. Applications are written by a huge
number of different organizations, each making its own decisions
about hardware infrastructure, operating system (OS) choice,
configuration, installation, upgrades, backups, and high
availability, to name a few factors. As a result, it is very
difficult to create a large-scale data center that supports this diversity.
Cloud providers that offer virtualized infrastructure (for example,
allowing the user to choose a virtual machine of a particular size and
OS) do provide automation, but really only remove the complication
of diverse physical infrastructure and connectivity. The
inconsistency of application requirements and behaviors remains
in place.
CLOUD-NATIVE TECHNOLOGY: ADDRESSING
THE NEED FOR OPERATIONAL EFFICIENCY
For any organization seeking to use multiple applications,
this diversity is an operational cost that brings no real benefits,
but instead increases the risk of errors and makes fault diagnosis
increasingly difficult. In order to resolve the diversity, there is a
need to be able to manage applications in the same manner, no
matter where the application was originally written. This is where
cloud-native containerized applications have a role to play. These
are applications written specifically for the automated
environment, packaged in containers (such as Docker containers)
and managed by a common, usually open-source, application
orchestrator (often Kubernetes).
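As a sketch of what this uniform management looks like in practice, the following is a minimal Kubernetes Deployment manifest for a hypothetical containerized media-processing application; the image name, labels, and resource figures are illustrative assumptions, not a specific product:

```yaml
# Minimal Kubernetes Deployment for a hypothetical media-processing container.
# Image name, labels, and replica count are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: media-transcoder
spec:
  replicas: 3                  # the orchestrator keeps three instances running
  selector:
    matchLabels:
      app: media-transcoder
  template:
    metadata:
      labels:
        app: media-transcoder
    spec:
      containers:
      - name: transcoder
        image: example.com/media/transcoder:1.0   # hypothetical image
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
```

Because every application is described in this same declarative form, the orchestrator can deploy, upgrade, scale, and restart each of them in an identical way, regardless of who wrote it.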
By building these cloud-native environments, service providers
can take full advantage of the operational efficiencies and
scalability brought by cloud technology. Superficially, it would
appear that a wide-area network with processing capabilities
distributed at multiple locations is very different from a typical
transaction-based web application that is often deployed in
cloud providers. Indeed, there are many differences in the type
of applications and the need for data flow, which must be taken
into consideration. The automation, self-service, and consistency
attributes of cloud technology, though, can help address some of
the operational needs of a contribution network.
Clearly, if processing is required at the edges of the network,
then the topology appears to be very different from a data center.
However, this topology difference need not cause an issue, as long as
network connectivity allows the processing nodes to work together
in a cluster. Those nodes can optionally host acceleration hardware,
such as FPGAs or Intel Quick Sync Video (QSV), if that is relevant
for the processing required. This network component is perhaps the
key asset of an organization offering contribution networks, so it
is reasonable to expect that on-premises deployment, in fact across
multiple premises, is likely to be the most common model.
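One way to express such hardware differences within the same orchestration model is to label the nodes that carry acceleration hardware and have workloads request them. The label name and image below are a hypothetical sketch, not a standard convention:

```yaml
# Hypothetical node label advertising an FPGA, applied when a node joins:
#   kubectl label node edge-site-1 accelerator=fpga
# A pod that needs the accelerator then selects only such nodes:
apiVersion: v1
kind: Pod
metadata:
  name: fpga-encoder
spec:
  nodeSelector:
    accelerator: fpga        # schedule only onto FPGA-equipped nodes
  containers:
  - name: encoder
    image: example.com/media/fpga-encoder:1.0   # illustrative image name
```

The scheduler then places accelerated workloads only on suitable edge nodes, while unaccelerated workloads can run anywhere in the distributed cluster.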
Putting this together, we can see it is actually possible to build a
distributed cloud, comprising processing nodes located in many
different sites, connected by the network that is the key asset
of the contribution network service provider. It is then possible to
use the same open cloud technology that is used behind some of
the world's biggest cloud providers within that distributed cloud,
thus creating much higher levels of automation and reducing
operational cost.
INTO 2018: EXPERIENCES MATTER
Consumer experience matters, and introducing the innovative
solutions that will future-proof TV and media businesses will be a
focus in 2018. While maintaining quality will be a given, the focus
must be not only on delivering more video but also on accommodating how
and when consumers want to watch it. As Ericsson ConsumerLab
TV and Media research indicates, by 2020 only one in 10 consumers
will be stuck watching TV only on a traditional screen. Indeed, in
the same time frame, we estimate half of all TV and video viewing
will be on mobile—an 85 percent increase since 2010.
To keep up with consumer demand for high-quality viewing
experiences across all devices, the industry needs greater flexibility,
low latency, and improved operational efficiency in a very competitive
marketplace. Innovations in cloud and virtualization will support
the delivery of the best viewing experiences without excessive costs,
helping these businesses to transform, differentiate, and compete in
2018 and beyond.
Ericsson is an award-winning, global leader in TV and media products
and services, with a proven track record in delivering TV and media
business transformation for over 25 years. Working with customers
around the world, Ericsson offers an extensive portfolio of products
and services through its Media Solutions business, spanning media
processing, delivery, and consumer experience.
For more information about Ericsson in TV and media, please visit