This guide describes how to configure Amazon Rekognition features in Accurate.Video for enriching assets with metadata.
There are a number of prerequisites that must be met for Amazon Rekognition to work.
First of all, make sure that Amazon Rekognition is available in the region where you are running Accurate.Video. Refer to the AWS link below for availability in different regions.
AWS Regional Product Services
Another requirement is that the Accurate.Video runner service needs to run in the same region as the S3 bucket where assets are located. For example, to enrich assets from an S3 bucket in Ireland (eu-west-1), the Amazon Rekognition job has to be started from this region as well, meaning that the job runner service needs to run in Ireland.
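To confirm which region a bucket lives in, you can query its location. The following is a minimal sketch using the AWS SDK for JavaScript v3, reusing the av-rekognition-test bucket name from the policy example further down; note that GetBucketLocation returns an empty LocationConstraint for us-east-1.

import { S3Client, GetBucketLocationCommand } from "@aws-sdk/client-s3";

// Query the region of the asset bucket (bucket name from this guide's example).
const { LocationConstraint } = await new S3Client({}).send(
  new GetBucketLocationCommand({ Bucket: "av-rekognition-test" })
);
console.log(LocationConstraint || "us-east-1"); // empty means us-east-1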
There are two options for controlling the region of the runner:
The runner needs permissions to read the S3 bucket and to start Amazon Rekognition jobs. This means that the IAM role associated with the runner must have the AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess policies attached. More details on how to configure IAM roles for Amazon Rekognition can be found in the AWS documentation.
Note that when running on ECS from our templates, these roles are configured automatically from the start.
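If you manage the runner's IAM role yourself rather than using the ECS templates, the two managed policies can be attached programmatically. A minimal sketch with the AWS SDK for JavaScript v3, where the role name av-runner-task-role is a placeholder:

import { IAMClient, AttachRolePolicyCommand } from "@aws-sdk/client-iam";

const iam = new IAMClient({});
// Attach both AWS managed policies to the runner's role (placeholder name).
for (const policy of ["AmazonRekognitionFullAccess", "AmazonS3ReadOnlyAccess"]) {
  await iam.send(new AttachRolePolicyCommand({
    RoleName: "av-runner-task-role",
    PolicyArn: `arn:aws:iam::aws:policy/${policy}`
  }));
}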
If the runner is not using the same AWS account as the connected S3 bucket, one additional step is required: the S3 bucket policy must include the ARN of the role that the runner is using. An example S3 bucket policy granting access to the Accurate.Video job runner is shown below.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::010652268016:user/av-rekognition-test",
          "arn:aws:iam::381397495928:role/test-av-JobsTaskStack-2ANMIWZ-ECSTaskRole-UUAXH1CX2S04"
        ]
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::av-rekognition-test",
        "arn:aws:s3:::av-rekognition-test/*"
      ]
    }
  ]
}
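A policy like the one above has to be applied from the account that owns the bucket. As a sketch, using the AWS SDK for JavaScript v3 (the file name bucket-policy.json is a placeholder):

import { readFile } from "node:fs/promises";
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

// Apply the bucket policy; the client should target the bucket's own region.
await new S3Client({ region: "eu-west-1" }).send(
  new PutBucketPolicyCommand({
    Bucket: "av-rekognition-test",
    Policy: await readFile("bucket-policy.json", "utf8")
  })
);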
Users who want to start an analysis job from Accurate.Video require the role analysis_write or super_user. The user also needs write access to the asset to be analyzed.
The result from an analysis job will be stored on the asset as timespan metadata. Depending on the job that was run, these timespans will be of type REKOGNITION_SHOT, REKOGNITION_TECHNICAL_CUE, REKOGNITION_CONTENT_MODERATION, REKOGNITION_LABEL_DETECTION, REKOGNITION_CELEBRITY_DETECTION or REKOGNITION_TEXT_DETECTION, and will have the following metadata fields:

source - always set to REKOGNITION for Rekognition jobs
confidence - a number in the range [0, 100]
type - the detected type, e.g. Shot, BlackFrames, Graphics, Violence or WORD
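As an illustration only (the exact timespan shape is not documented here and may vary between Accurate.Video versions), a content moderation hit could be stored roughly like this:

{
  "type": "REKOGNITION_CONTENT_MODERATION",
  "metadata": {
    "source": "REKOGNITION",
    "confidence": "97.3",
    "type": "Violence"
  }
}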
To view the new markers in the UI, you will need some additional frontend settings; see the example below:
markers: {
  groups: [
    ...
    {
      // Group all markers produced by Rekognition jobs
      match: marker => marker?.metadata.get("source") === "REKOGNITION",
      title: "Rekognition",
      id: "Rekognition",
      readOnly: true,
      rows: [
        {
          match: () => true, // Default
          track: ({metadata}) => metadata.get("name"),
          title: ({metadata}) => metadata.get("name"),
          tooltip: ({metadata}) =>
            `${metadata.get("name")} ${metadata.get("description")}% confidence`
        }
      ],
      markerStyle: _ => ({backgroundColor: "#ff9900"})
    },
    ...
  ],
},
markersMetadataSettings: [
  ...
  {
    // Map Rekognition timespan metadata onto the generic marker fields
    match: type => type.startsWith("REKOGNITION_"),
    mappings: {
      name: "type",
      description: "confidence",
      trackId: "trackId"
    }
  },
  ...
]
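With these mappings, the marker title and track come from the detected type, and the tooltip shows the mapped confidence value for each Rekognition marker.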