Industry Updates
Use Case: Enterprise Content Management
While a tennis tournament involves fairly specific activity, Axon (formerly Taser) received a similar
request for creating clips from subject matter that
was more of a moving target. Axon provides public
safety technology and equipment, including cameras that many police forces use to record video of daily interactions with the public. Axon was selected to
be the official AI partner of the Los Angeles Police
Department. “The LAPD has accumulated roughly
33 years’ worth of video data in the past year alone,”
says Daniel Ladvocat Cintra, senior product manager at Axon AI Research. The department’s challenge
is how to find anything valuable within video footage
that is the equivalent of one single camera streaming
24/7 since 1985.
Axon obviously needed to create a post-production
tool to accelerate footage review. Axon staffers are
training its AI to understand what kind of incident was recorded, so the system can tell the difference between a pursuit and a pedestrian stop. “20 minutes of footage might only contain 30 seconds’ worth of information that’s pertinent,” says Cintra. The company worked with the LAPD to identify which parts of an incident contain valuable information, such as specific objects or activities, to
help build a 30-second clip of the most relevant content
within a 20-minute-long video.
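The article does not describe how that selection works internally. Purely as a rough sketch of the general idea, assuming an upstream classifier has already assigned a relevance score to each short segment of footage, a 30-second clip could be assembled by taking the highest-scoring segments that fit the time budget:

# Hypothetical sketch: assemble a ~30-second highlight from scored segments.
# Assumes an upstream model has already scored each segment for relevance;
# this is not Axon's actual code.
def build_highlight(segments, budget_seconds=30.0):
    """segments: list of (start_sec, end_sec, relevance_score) tuples."""
    ranked = sorted(segments, key=lambda s: s[2], reverse=True)
    chosen, used = [], 0.0
    for start, end, score in ranked:
        length = end - start
        if used + length > budget_seconds:
            continue  # skip segments that would overshoot the time budget
        chosen.append((start, end, score))
        used += length
    # Re-order chronologically so the clip plays back naturally.
    return sorted(chosen, key=lambda s: s[0])

clip = build_highlight([(0, 10, 0.2), (300, 315, 0.9), (600, 612, 0.8)])
print(clip)  # at most 30 seconds of the most relevant footage, in playback order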
Another issue police agencies everywhere face is upholding privacy laws when providing video footage for a court case or a Freedom of Information request. The company heard from customers that it can take up to 8 hours to redact 1 hour of video footage, because an officer needs to go through the content and blur personally recognizable information.
Axon is using AI to help remove recognizable images
from video content automatically in the post-production
process. “Right now we blur skin (including tattoos) and
faces,” says Cintra. The system detects where those objects appear on screen and then has an officer approve the redaction. The content can then be released for use.
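Axon’s detection models are not public. As a minimal illustration of the blur step only, here is a sketch using OpenCV’s bundled Haar-cascade face detector; redacting skin and tattoos in the field would require far more capable segmentation models, and the file names below are placeholders:

# Illustrative face-blurring sketch with OpenCV's stock Haar cascade.
# Not Axon's pipeline; input/output file names are made up.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

cap = cv2.VideoCapture("incident.mp4")
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = blur_faces(frame)
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("redacted.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
    writer.write(frame)
cap.release()
if writer is not None:
    writer.release()  # an officer would still review and approve the output before release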
Axon’s police agencies upload their video content
to a secure cloud-based environment, Evidence.com,
which is run on the Microsoft Azure platform. To date,
14.9 petabytes of data have been uploaded to Evidence.com, with 11.5 petabytes currently active.
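Evidence.com’s ingest pipeline is not described in any detail. Purely as a generic illustration of pushing a video file into Azure Blob Storage with Microsoft’s azure-storage-blob SDK (the connection string, container, and blob names below are placeholders):

# Generic Azure Blob Storage upload using Microsoft's azure-storage-blob SDK.
# Evidence.com's actual ingest is not public; names here are placeholders.
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"])
blob = service.get_blob_client(container="evidence-uploads",
                               blob="unit-42/2024-05-01/incident.mp4")

with open("incident.mp4", "rb") as video:
    blob.upload_blob(video, overwrite=True)  # the SDK chunks large files automatically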
Use Case: Asset Management
While several of the previous examples include custom development, the AI in Cantemo’s media asset
management product Iconik is available to any customer of the tool. The company’s hybrid cloud product stores high-resolution assets on customers’ own infrastructure and works as an aggregation layer in the
cloud, providing a holistic view of all assets an organization owns. “The most common way we are using
AI today is to recognize assets. Many companies have
thousands and thousands of hours of content and traditionally they have been using a manual workflow to
tag each scene or each frame to describe what it is,”
says Parham Azimi, CEO and co-founder at Cantemo.
Cantemo has built an AI framework that integrates
with existing machine learning systems that can be
used to identify content.
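Cantemo has not published the framework’s internals. One common way such an integration layer is structured, sketched here purely as an assumption, is a small interface that each machine learning back end implements, so vendors can be swapped without touching the rest of the system:

# Hypothetical pluggable tagging interface; not Cantemo's actual code.
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class Tag:
    label: str          # e.g. "Outer space"
    confidence: float   # 0.0 to 1.0
    start_tc: str       # start timecode of the tagged sequence
    end_tc: str         # end timecode of the tagged sequence

class TaggingBackend(Protocol):
    def tag_proxy(self, proxy_url: str) -> List[Tag]:
        """Analyze a low-resolution proxy and return time-coded tags."""
        ...

def tag_asset(proxy_url: str, backend: TaggingBackend) -> List[Tag]:
    # The framework depends only on the protocol, not on any one vendor's API.
    return backend.tag_proxy(proxy_url)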
Cantemo shares the proxy version of the video content with the machine learning system; the first framework the company integrated with was from Google. A tagged sequence comes back looking like this:
Start timecode: 00:00:12:10
End timecode: 00:00:16:15
Tags: Spacecraft (75%), Outer space (89%), Space station (72%), Cat (20%)
The timecodes define when a sequence
starts and ends. The tags describe what is
shown in the sequence, along with a confidence level for how correct each tag is
likely to be. So the above says that the sequence is most probably a spacecraft in
outer space, and probably not a cat, even
though there might have been something in the frame that resembled a cat.
The confidence level can be used to trigger manual workflows, where anything tagged with low confidence is flagged for a person to review.
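As a small, hypothetical illustration of that kind of gate, using the tags from the example above and an arbitrary threshold:

# Hypothetical confidence gate: auto-accept high-confidence tags, queue the
# rest for human review. The threshold is illustrative, not Cantemo's value.
tags = {"Spacecraft": 0.75, "Outer space": 0.89, "Space station": 0.72, "Cat": 0.20}
AUTO_ACCEPT = 0.70

accepted = {label: c for label, c in tags.items() if c >= AUTO_ACCEPT}
needs_review = {label: c for label, c in tags.items() if c < AUTO_ACCEPT}

print("auto-tagged:", accepted)             # Spacecraft, Outer space, Space station
print("sent to a reviewer:", needs_review)  # Cat (20%)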
Photo caption: The Los Angeles Police Department collects a tremendous amount of video and is using AI to determine what footage contains valuable data.