below a certain threshold requires manual input to
review, approve, or reject certain suggestions.
“The Google video intelligence system is heavily
trained on all of the assets that Google owns,” says
Azimi. “If you upload a piece of video with a sneaker,
Google may suggest this is a white sneaker.” Iconik can
then run a keyword search to identify all white sneakers within existing footage.
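Tag-driven retrieval of that kind can be sketched in a few lines. The asset records and label fields below are hypothetical illustrations, not Iconik's or Google's actual API:

```python
# Hypothetical asset records carrying AI-generated labels
# (illustrative only; not Iconik's or Google's real data model).
assets = [
    {"id": "clip-001", "labels": ["white sneaker", "outdoor", "running"]},
    {"id": "clip-002", "labels": ["black boot", "studio"]},
    {"id": "clip-003", "labels": ["white sneaker", "close-up"]},
]

def search_by_label(assets, keyword):
    """Return IDs of assets whose labels contain the keyword (case-insensitive)."""
    keyword = keyword.lower()
    return [a["id"] for a in assets
            if any(keyword in label.lower() for label in a["labels"])]

print(search_by_label(assets, "white sneaker"))  # -> ['clip-001', 'clip-003']
```

Once the vision model has written its suggested labels into the asset records, "find all white sneakers" is an ordinary search problem rather than a manual review of raw footage.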
Use Case: Personalization
IRIS.TV is a video personalization and programming
platform; its core technology is used to serve contextually relevant content. “We work with publishers and
marketers to serve the right content to audiences in real
time. Our AI is specifically computational AI,” says Field
Garthwaite, co-founder and CEO of IRIS.TV. This means
the platform's intelligence learns from specific data and
makes choices. IRIS.TV builds personalized streams for
customers, similar to what YouTube does, to help publishers understand viewers’ likes and dislikes.
This AI system analyzes the content consumers prefer
and searches out similar content based on shared
interests, tone of story, popularity, and other business
rules a publisher may put in place. “We have this API
plugin in a video player on [the sites for] Sports Illustrated
or Time magazine,” says Garthwaite. “You’re going
to see things like Skip buttons or thumbs up/thumbs
down or a little pop-up showing what’s next.”
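A recommendation pass of the kind Garthwaite describes (filter candidates by publisher business rules, then rank the survivors on shared interests, tone, and popularity) might look roughly like this sketch. The field names and weights are illustrative assumptions, not IRIS.TV's actual model:

```python
def rank_next(candidates, viewer_interests, current_tone, blocked_topics):
    """Drop candidates that violate publisher business rules, then rank
    the rest by interest overlap, tone match, and popularity.
    The 0.6/0.2/0.2 weights are illustrative assumptions."""
    allowed = [c for c in candidates
               if not set(c["topics"]) & set(blocked_topics)]

    def score(c):
        interest = len(set(c["topics"]) & set(viewer_interests))
        tone = 1.0 if c["tone"] == current_tone else 0.0
        return 0.6 * interest + 0.2 * tone + 0.2 * c["popularity"]

    return sorted(allowed, key=score, reverse=True)

candidates = [
    {"id": "v1", "topics": ["nba", "highlights"], "tone": "upbeat", "popularity": 0.9},
    {"id": "v2", "topics": ["politics"], "tone": "serious", "popularity": 0.8},
    {"id": "v3", "topics": ["nba", "interview"], "tone": "upbeat", "popularity": 0.4},
]
ranked = rank_next(candidates, ["nba"], "upbeat", ["politics"])
print([c["id"] for c in ranked])  # -> ['v1', 'v3']
```

The business rules act as a hard filter before any scoring happens, which is why a publisher's "never show topic X" constraint can coexist with a learned ranking model.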
IRIS.TV’s platform AI is used to display the right
editorial or to insert ads into contextually relevant
editorial. Its latest product improves stream yields;
the technology determines the best frequency for
serving ads or placing branded content into
the stream. A campaign IRIS.TV ran for Bud Light is
driving 20% completion rates on a 100-second piece
of branded, in-stream content by targeting the right
audience with the right content.
Use Case: Quality of Service
“The internet is not really architected to deliver
high-quality video at scale, and therefore this trend
toward more video consumption is going to require a
major unique approach to really work well,” says Ed
Haslam, CMO of Conviva. Conviva installs sensors, or SDKs, to measure video quality of service; to date, it has deployed almost 3 billion sensors
globally on behalf of its publisher base, and it
measured around 14 billion viewing hours of video
content in 2017. “As far as we know that’s the largest
installed base of sensors (outside of walled-garden
environments), especially for a multi-publisher solution provider,” says Haslam.
The company’s new product is Video AI Alerts, developed for technical operations teams to discover
video viewing anomalies like rebuffering, slow
start times, and poor bitrates. Conviva’s AI not only calculates what is normal, but also senses anomalies and, more importantly, correlates those anomalies to potential causes. “It could be the asset or it
could be your whole CDN provider,” says Haslam.
HBO has thousands of assets, so setting up alerts
to track every single one of them would be onerous.
In one case, HBO said it never would have found the
issue that was occurring with a specific asset on its own. A provider can set sensitivities to trigger an alert when
anomalies deviate by a specific percentage from the mean for
a predetermined percentage of its audience. This can be
based on audience size, because HBO likely cares more
about the season finale of Game of Thrones than about
viewers watching old episodes of The Sopranos in the
middle of the night.
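The trigger rule itself (a deviation of some percentage from the baseline mean, affecting at least some share of the audience) reduces to a couple of comparisons. The metric and thresholds below are illustrative assumptions, not Conviva's implementation:

```python
def should_alert(samples, baseline_mean, pct_off=0.5, audience_frac=0.3):
    """Fire an alert when a metric (e.g., per-session rebuffering ratio)
    deviates from its baseline mean by more than pct_off for at least
    audience_frac of the sessions sampled.
    All thresholds here are illustrative, not Conviva's actual defaults."""
    anomalous = [s for s in samples
                 if abs(s - baseline_mean) > pct_off * baseline_mean]
    return len(anomalous) / len(samples) >= audience_frac

# Baseline rebuffering ratio of 2%; a spike hits part of the audience.
print(should_alert([0.02, 0.021, 0.05, 0.06, 0.019], baseline_mean=0.02))  # True
print(should_alert([0.02, 0.021, 0.019, 0.02, 0.022], baseline_mean=0.02))  # False
```

Raising `audience_frac` for low-traffic catalog titles and lowering it for a tentpole premiere is one way to express the Game of Thrones-versus-late-night-Sopranos prioritization the article describes.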
“[The Video AI Alert] really reduces the level of effort the operations team has to expend doing the diligence themselves,” says Haslam. “[This] gives them the
prescriptive diagnostic data out there and arms them
to go solve the problem. Beforehand they were doing
all that stuff that machine learning is doing by themselves, by doing multiple queries against the UI (user
interface). They would restrict and say ‘Only show me
CDN 1’s data.’ It could take them hours potentially
running all those queries, whereas machine learning
does it within seconds or fractions of seconds.”
Use Case: Ad Insertion
If emotions or content can be detected in video, then
ad insertion is one use case that brand marketers will
welcome with open arms. Video marketing platform
Innovid will be launching updates to its platform in Q1
2018 to understand contextual intelligence—for example, to see how one ad creative performs against millions of different YouTube videos. The end result is that the
AI can be used in asset management to help identify video content and apply relevant
tags and timecodes.