workflow for more than 600 production partners. The
system, built by Discovery’s in-house IT team,
allows production partners to “send finished
programs and all associated elements to cloud
object storage over any Internet connection,”
including automated “check-in of the assets to …”
A Signiant software client sits on the production partner’s desktop and ties back to a
Signiant-managed infrastructure in the cloud,
allowing accelerated file transfers between the
producer and Discovery.
Now, back to those five key steps. The fourth is
to assign configuration. This is the point at which
custom functionality is added to each node. It’s
possible to do this during the second step above,
but some industry experts recommend laying
out all the nodes and transitions before assigning functionality in the configuration step. This
may help avoid getting caught in the weeds before laying out the big-picture sequences.
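As a rough illustration of this layout-first approach, the node and transition structure can be sketched before any configuration is attached. All node names and settings below are hypothetical, not any particular product’s API:

```python
# Illustrative sketch: lay out nodes and transitions first,
# then assign configuration in a separate, later pass.

class Node:
    def __init__(self, name):
        self.name = name
        self.config = None  # left empty until the configuration step

# Lay out the big-picture sequence before adding custom functionality
ingest, transcode, qc, deliver = (Node(n) for n in
                                  ("ingest", "transcode", "qc", "deliver"))
transitions = [(ingest, transcode), (transcode, qc), (qc, deliver)]

# Step four: assign configuration to each node once the layout is settled
configs = {
    "ingest":    {"watch_folder": "/mnt/dropbox"},
    "transcode": {"preset": "h264_1080p"},
    "qc":        {"checks": ["loudness", "black_frames"]},
    "deliver":   {"target": "cdn_origin"},
}
for node in (ingest, transcode, qc, deliver):
    node.config = configs[node.name]
```

Separating the two passes keeps the big-picture sequence visible while the node-by-node settings are filled in.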
Fifth and finally, validating the workflow is key
to determining that the workflow will properly
automate. For instance, this “test run” approach
could uncover points at which a failure of a node
sequence requires human intervention, which
limits overall automation.
Failure at any point would trigger a warning
that the workflow is not valid. When those points
are identified, you’ll need to decide whether to
add a user task alert into the workflow, so that
a human is alerted to the need for attention, or to choose a process sequence that properly automates between the
steps that have been flagged as errors during validation.
Also, beyond initial validation, it’s important to understand the overall impact of a sequence failure, and whether it applies to a particular asset or to the entire group of assets.
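A validation pass along these lines can be sketched as follows. The node names, the requires_human flag, and the validate helper are all illustrative, not any vendor’s actual API:

```python
# Illustrative "test run" validation: walk each node and flag points
# where a failure would require human intervention or config is missing.

def validate(workflow):
    """Return (is_valid, problem_nodes) for a list of node dicts."""
    problems = [n["name"] for n in workflow
                if n.get("requires_human") or not n.get("config")]
    return (len(problems) == 0, problems)

workflow = [
    {"name": "ingest",    "config": {"watch_folder": "/in"}},
    {"name": "transcode", "config": {"preset": "h264"}},
    {"name": "review",    "config": {}, "requires_human": True},
]

is_valid, problems = validate(workflow)
# An invalid result triggers the warning and the decision described
# above: add a user task alert for "review", or swap in a sequence
# that automates that step entirely.
```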
Viewing Your Flow
In the early days of file-based workflow automation, there wasn’t much of a way to visualize
progress in a workflow. Either the processed files
ended up in the proper place, after an indefinite
period of time, or the technician on call got paged
with an error alert.
These days, though, that’s all changed. From
dashboards to pipeline visualizations, today’s
workflow automation tools and services offer
several windows of insight into your bottlenecks.
Even when the content has been processed and
uploaded to a content delivery network (CDN),
there are dashboards to monitor stream consistency. For example, in the 2017 Streaming Media
Sourcebook, we discuss an over-the-top (OTT)
“heat map” dashboard in the Quality of Service
Buyers Guide (go2sm.com/qos).
There are also monitoring tools for continuous
streams. One of these, from a company called
StreamGuys, is a clever, cloud-based service
named IsMyStreamUp that allows users to configure alerts via a web-based dashboard.
“New customers simply create an account,
enter the URL, and configure their contacts for
receiving alerts,” a company posting notes. “Is-
MyStreamUp alerts the specified contacts via
email the moment it detects a monitored stream
or page has gone offline, and notifies them again
when the stream or page is back up.”
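The alerting behavior described in that posting can be sketched as a small state machine. This is an illustrative sketch, not StreamGuys’ actual implementation; a real monitor would poll the stream or page URL over HTTP:

```python
# Illustrative alerting logic for a stream-up monitor: alert the moment
# a monitored stream goes offline, and notify again when it's back up.

def alerts_for(statuses):
    """Given a sequence of up/down poll results, return alerts to send."""
    alerts, was_up = [], True
    for is_up in statuses:
        if was_up and not is_up:
            alerts.append("stream offline")   # email contacts immediately
        elif not was_up and is_up:
            alerts.append("stream back up")   # notify again on recovery
        was_up = is_up
    return alerts

# Five polling cycles: up, up, down, down, up -> two alerts, not four
print(alerts_for([True, True, False, False, True]))
# ['stream offline', 'stream back up']
```

The point of the state machine is that contacts hear about transitions, not about every failed poll.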
Another type of monitoring measurement is
the number of processing hours (or
machines, if you’re running an in-house workflow
automation) required to process video assets.
Most home-grown workflow solutions were
designed to measure performance against an
hour’s worth of video, but the trend toward shorter or non-standard-length video assets means
that your next-generation workflow automation
should measure in minutes.
This is especially true if you’re planning to produce
multiple data rates or resolutions to offer adaptive bitrate (ABR) delivery to a wide variety of devices, since it’s feasible to have eight to 10 different transcodes for a single video asset. Minutes
add up quickly, and you don’t want your solution
to bog down when it’s trying to handle all of the
possible permutations necessary for ABR, or even
for the different flavors of HTTP-based streaming delivery, such as Apple’s HTTP Live Streaming (HLS)
or MPEG’s Dynamic Adaptive Streaming over HTTP (DASH).
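To see how quickly minutes add up, here’s a back-of-the-envelope calculation with illustrative numbers (real per-rendition cost varies by codec, preset, and hardware):

```python
# Rough arithmetic: total transcode output for one ABR ladder.
asset_minutes = 10          # a single short-form source asset
ladder_renditions = 9       # "eight to 10 different transcodes" per asset

total_output_minutes = asset_minutes * ladder_renditions
print(total_output_minutes)  # 90 minutes of output per 10-minute asset
```

Multiply that by a daily intake of hundreds of assets and an hour-based metric quickly loses its usefulness.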
Clouding Up Your Workflow
While a number of workflow automation tools
started out as on-premise products, typically for
desktop and eventually for workgroup-sized operations, a more recent advancement has seen
workflow options head to the cloud.
Telestream, which has long made the Vantage
on-premise platform (see Figure 2 on the next
page), now offers transcoding and workflow configurations in the cloud, thanks to a tight integration with Amazon Web Services (AWS).