Job templates are designed to let customers modify our common workflows, such as ingest and render, without having to modify our code. They can also be used to abstract away the full job specification from users, allowing a user to queue a very simple job description which is then expanded into a full job by the template system.
In order to use a template, we create a job containing an action with type TEMPLATE, and a metadata field with key template_id
referring to the template to use. An example of such a job:
{
  "type": "JOBTYPE",
  "actions": [
    {
      "type": "TEMPLATE",
      "metadata": [
        {
          "key": "template_id",
          "value": "mytemplate"
        },
        {
          "key": "some_parameter",
          "value": "some_value"
        }
      ]
    }
  ]
}
When this job is run, the template action will attempt to find the template with the given id based on the following rules:
- If av.runner.template_location is set, check in that directory on disc for a file named <template_id>.j2.json.
- Otherwise, fall back to the default template with that id available on the classpath (defaults can also be overridden through the settings API in the adapter, see below).

The jobs produced by the template are queued, and the template action then waits until all queued jobs are finished.

We have a set of standard workflows in the system that use the template system; the default job templates shipped with the runner are described further down.
If a customer wishes to modify these workflows, there are a few ways to do so:

- Create a new template with the same name as the default, e.g. ingest.j2.json, and add it to the template_location directory. This will override the default template that is available on the classpath.
- Create a new template under a different name, e.g. custom_ingest.j2.json, and then use template redirection to specify that ingest should instead run custom_ingest. This is done using configuration on the runner with the following format: av.runner.template.ingest=custom_ingest. There can be any number of these configuration mappings. There is one default such template redirection configured on the runner: template jobs with id render will instead use the template with id render/elemental.
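As a sketch, runner configuration that redirects two of the standard templates to customer-supplied ones could look as follows (custom_ingest and custom_render are hypothetical template ids):

# Jobs queued with template id "ingest" will instead use the "custom_ingest" template.
av.runner.template.ingest=custom_ingest
# Jobs queued with template id "render" will instead use a hypothetical "custom_render"
# template rather than the default redirection to "render/elemental".
av.runner.template.render=custom_render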
Templates are written in the Jinja2 templating language, and must produce an array of JobDto structures in JSON format. A context is fed into the template engine, which contains the following:

- metadata: metadata fields from the template job action, from any actions before it in the job, and from the job itself can be accessed using metadata.field. Note that this will only contain the first (closest) value for the given field.
- metadataList: gives access to all values for a given field, as a list, rather than only the first.
- files (as a list of FileDto) will be available in the template given the following:
  - If files is available in the job, this will be assumed to be a list of FileInputDto, which will be transformed to a list of FileDto and made available to the template.
  - If there are metadata fields with key file_id in the job, those files will be fetched from the adapter and then be available to the template in files.
- assets (as a list of AssetDto) will be available in the template given that there were metadata fields with key asset_id in the job.
- uuid: a field with a value that is unique for each run.

In addition to this there is one additional tag available: storage_id_by_tag X, which will look up the best available writable storage id that is either tagged with the given tag X, or if no such storage could be found, it will return the first available storage id.

An example of what a template might look like:
[
  {
    "type": "BACKUP",
    "actions": [
      {
        "type": "WAIT_FOR_JOB",
        "metadata": [
          {
            "key": "wait_for_job_id",
            "value": "{{metadata.wait_for_job_id}}"
          }
        ]
      },
      {% for a in assets %}
      {
        "type": "ASSET_METADATA_UPDATE",
        "metadata": [
          {
            "key": "asset_id",
            "value": "{{a.id}}"
          },
          {
            "key": "metadata:some_metadata_field",
            "value": "some_value"
          }
        ]
      },
      {% endfor %}
      {% for f in files %}
      {
        "type": "COPY_FILE",
        "metadata": [
          {
            "key": "source_file_id",
            "value": "{{f.id}}"
          },
          {
            "key": "target_path",
            "value": "{{uuid}}/{{f.fileName}}"
          },
          {
            "key": "target_storage_id",
            "value": "{% storage_id_by_tag offline %}"
          }
        ]
      },
      {% endfor %}
    ]
  }
]
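Since templates are ordinary Jinja2, standard Jinja2 constructs can be applied to the context, for example to fall back to a fixed value when an optional metadata field was not supplied on the job. A minimal sketch (the action and field names are the same hypothetical ones used in the example above, and this assumes a missing field behaves as an undefined value in Jinja2):

{# hypothetical action: use some_parameter from the job if present, otherwise a fixed fallback #}
{
  "type": "ASSET_METADATA_UPDATE",
  "metadata": [
    {
      "key": "metadata:some_metadata_field",
      "value": "{{ metadata.some_parameter | default('some_value') }}"
    }
  ]
}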
Templates can be added (or default templates overridden) by posting them to the settings API in the adapter.
The settings type should be av_job_template, the template code should be added in the blob field on the setting, and a metadata
field with key value should contain the template name/id. Note that the value field should be the id of the template as used in the job, not the filename of the template file, e.g. ingest, not ingest.j2.json.
Example:
POST /api/settings
{
  "type": "av_job_template",
  "blob": "{\"type\": \"TEST\"}",
  "metadata": [
    {
      "key": "value",
      "value": "my_template"
    }
  ]
}
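The blob normally holds a complete Jinja2 template; since it is carried in a JSON string it has to be escaped accordingly. A minimal sketch of posting a one-action template under a hypothetical id my_custom_template (the job type and action are placeholders):

POST /api/settings
{
  "type": "av_job_template",
  "blob": "[{\"type\": \"JOBTYPE\", \"actions\": [{\"type\": \"ASSET_METADATA_UPDATE\", \"metadata\": [{\"key\": \"asset_id\", \"value\": \"{{metadata.asset_id}}\"}, {\"key\": \"metadata:some_metadata_field\", \"value\": \"some_value\"}]}]}]",
  "metadata": [
    {
      "key": "value",
      "value": "my_custom_template"
    }
  ]
}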
The runner contains a set of default job templates used to perform common tasks, including ingest, poster creation etc.
Perform standard ingest with support for video, audio, subtitles, marker import, baton files etc.
If the job fails, the given target asset will be deleted. Based on the file type of each given file, a template partial file will be
included with actions for the file, see partials/file_ingest for more information.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Asset to ingest given files to |
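For illustration, a job that expands the standard ingest template for an existing asset and an already registered file might be queued like the sketch below (the job type, asset id and file id are placeholders):

{
  "type": "JOBTYPE",
  "actions": [
    {
      "type": "TEMPLATE",
      "metadata": [
        {
          "key": "template_id",
          "value": "ingest"
        },
        {
          "key": "target_asset_id",
          "value": "some-asset-id"
        },
        {
          "key": "file_id",
          "value": "some-file-id"
        }
      ]
    }
  ]
}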
Create a new poster based on given parameters.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Asset to attach poster to, optional. |
| source_path | String | URL to source video to extract poster from, see POSTER action for more information |
| image_format | String | Image format of extracted poster, see POSTER action for more information |
| destination_resolution | String | Resolution of extracted poster, see POSTER action for more information |
| region_source_in | String | Region source in of extracted poster, see POSTER action for more information |
| region_source_out | String | Region source out of extracted poster, see POSTER action for more information |
| frame | Integer | Frame of extracted poster, see POSTER action for more information |
| target_storage_id | String | Id of target storage to upload poster to |
| target_path | String | Path on target storage to upload poster to |
| file_metadata:X | String (Multiple) | File metadata, see IMPORT_FILE action for more information |
Results:
Run video analysis based on given parameters.
| Parameter | Type | Description |
|---|---|---|
| analysis_type | String | The type of analysis to run, for valid values see below |
| asset_id | String | Asset to analyze |
Supported analysis types:

- rekognition, see partials/rekognition_analysis for more information

Run AWS Rekognition on the given asset. See partials/rekognition_analysis for more information.
| Parameter | Type | Description |
|---|---|---|
| asset_id | String | Asset to analyze |
Do timespan export of the given type for an asset. Will create a file, and upload it to a storage.
| Parameter | Type | Description |
|---|---|---|
| timespan_type | String | The type of timespans to export; if subtitle, a subtitle export will be done, else a marker export. See partials/subtitle_export and partials/marker_export for more information |
Export the given file(s) to an external location.
| Parameter | Type | Description |
|---|---|---|
| export_url | String | Base location to export files to |
| append_filename_to_url | Boolean | If set, the filename of the file in the database will be appended to the export URL |
Render the given timeline using AWS Elemental, and optionally ingest the resulting file to a given asset.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Optional asset to ingest result to |
| target_storage_id | String | Storage to put result file on |
| target_path | String | Target path on storage to put result file on |
| preset_id | String | AWS Elemental preset id to use for main output |
| files | FileInputDto[] | Outputs to create, see ELEMENTAL_RENDER action for more information |
| timeline | TimespanDto[] | Inline timeline to render, see GET_TIMELINE action for more information. |
| timeline_asset_id | String | Asset to retrieve timeline from, see GET_TIMELINE action for more information. |
| expand_timeline_from_video_track | Boolean | See GET_TIMELINE action for more information |
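As an example, rendering the timeline of an existing asset with AWS Elemental and ingesting the result back onto the same asset could be queued roughly as below (all ids and the path are placeholders):

{
  "type": "JOBTYPE",
  "actions": [
    {
      "type": "TEMPLATE",
      "metadata": [
        {
          "key": "template_id",
          "value": "render/elemental"
        },
        {
          "key": "timeline_asset_id",
          "value": "some-asset-id"
        },
        {
          "key": "target_asset_id",
          "value": "some-asset-id"
        },
        {
          "key": "preset_id",
          "value": "some-preset-id"
        },
        {
          "key": "target_storage_id",
          "value": "some-storage-id"
        },
        {
          "key": "target_path",
          "value": "renders/output.mp4"
        }
      ]
    }
  ]
}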
Render the given timeline using FFMPEG, and optionally ingest the resulting file to a given asset.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Optional asset to ingest result to |
| target_storage_id | String | Storage to put result file on |
| target_path | String | Target path on storage to put result file on |
| preset_id | String | FFMPEG preset id to use for main output |
| files | FileInputDto[] | Outputs to create, see FFMPEG_RENDER action for more information |
| timeline | TimespanDto[] | Inline timeline to render, see GET_TIMELINE action for more information. |
| timeline_asset_id | String | Asset to retrieve timeline from, see GET_TIMELINE action for more information. |
| expand_timeline_from_video_track | Boolean | See GET_TIMELINE action for more information |
Render the given timeline using a provided transcoder, and optionally ingest the resulting file to a given asset.
| Parameter | Type | Description |
|---|---|---|
| transcoder | String | If elemental, see render/elemental; otherwise see render/ffmpeg |
Partial job templates abstract away common tasks, such as ingesting a video file, running AWS Rekognition on an asset etc. These can
be included in other templates by doing {% include "partials/template.j2.json" %}.
Adds ASSET_CLEANUP action, which will delete the given asset if the job fails. Should be added as the first part of the job if used.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | The asset to delete on failure |
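To use this from a custom template, the partial can be included as the first action, roughly as sketched below (the surrounding job, the follow-up action and the exact partial path are assumptions; check the bundled default templates for the precise include path and comma handling):

[
  {
    "type": "JOBTYPE",
    "actions": [
      {# delete the asset given by target_asset_id if any later action fails #}
      {% include "partials/asset_cleanup.j2.json" %},
      {
        "type": "ASSET_METADATA_UPDATE",
        "metadata": [
          {
            "key": "asset_id",
            "value": "{{metadata.target_asset_id}}"
          },
          {
            "key": "metadata:some_metadata_field",
            "value": "some_value"
          }
        ]
      }
    ]
  }
]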
Based on analyzed or provided file type, add file specific actions:

- Video files: see partials/video_ingest for more information. If skip_transcode is not given, a BUILD_TRANSCODE_JOB action will also be included to verify that the video is good enough to use in a browser, see that action for more information.
- Subtitle files: see partials/subtitle_ingest for more information
- Baton files: see partials/baton_ingest for more information
- Marker files: see partials/marker_ingest for more information
- Manifest files: see partials/manifest_ingest for more information

Will perform the following actions on the given video file:

- Poster creation, see the POSTER action for more information.
- Sprite map extraction, see the SPRITE_MAP action for more information. Will run the action with interval_count=50 and interval_seconds=10.
- Waveform extraction, see partials/waveform for more information.
- If run_rekognition is given, run AWS Rekognition shot detection, but allow the rekognition job to fail without failing the ingest job. See partials/rekognition_allow_fail for more information.

Import a subtitle file as timespans on the given asset.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | The asset to import subtitles to |
Metadata on the timespans can be controlled by adding metadata fields to the file in the template job. The following metadata values are used:
| Metadata key | Type | Description |
|---|---|---|
| identifier | String | Set SUBTITLE_IMPORT parameter identifier. If this is not set, identifier will be the filename of the subtitle file |
| subtitle_type | String | Set SUBTITLE_IMPORT parameter subtitle_type on the timespan |
| language | String | Set SUBTITLE_IMPORT parameter language on the timespan |
Import a baton file as timespans on the given asset.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | The asset to import baton markers to |
Import a marker file as timespans on the given asset.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | The asset to import markers to |
Parameters to the MARKER_IMPORT action can be controlled by adding metadata fields to the file in the template job. See the
action documentation for more information on parameters.
Import a manifest file to the given asset. See the MANIFEST_INGEST action documentation for more information on manifest ingest.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | The asset to import the manifest to |
| manifest_format | String | Set as MANIFEST_INGEST parameter manifest_format |
| manifest_template_id | String | Set as MANIFEST_INGEST parameter ingest_template_id |
Export subtitle timespans on an asset to a subtitle file of the given format on a given storage. See SUBTITLE_EXPORT action
documentation for more information.
| Parameter | Type | Description |
|---|---|---|
| asset_id | String | The asset with subtitle timespans to export |
| identifier | String | Only export timespans with the given identifier |
| language | String | Only export timespans with the given language |
| subtitle_type | String | Only export timespans with the given subtitle type |
| export_suffix | String | File suffix for the target file |
| destination_format | String | The destination subtitle format |
| target_storage_id | String | ID of storage to upload result file to, if not given, a storage with tag marker_export will be used |
| target_path | String | Target filename on the storage to upload result file to |
Export a set of timespans from the given asset to a result file on a given storage, in a given file format. See MARKER_EXPORT action documentation for more information.
| Parameter | Type | Description |
|---|---|---|
| asset_id | String | The asset with timespans to export |
| timespan_type | String | Only export timespans with the given timespan type |
| csv_fields | String | Set MARKER_EXPORT parameter csv_fields |
| csv_separator | String | Set MARKER_EXPORT parameter csv_separator |
| frame_rate_numerator | String | Set MARKER_EXPORT parameter frame_rate_numerator |
| frame_rate_denominator | String | Set MARKER_EXPORT parameter frame_rate_denominator |
| dropframe | String | Set MARKER_EXPORT parameter dropframe |
| filter:X | String | Set MARKER_EXPORT parameter filter:X |
| export_suffix | String | File suffix for the target file |
| destination_format | String | The destination file format |
| target_storage_id | String | ID of storage to upload result file to, if not given, a storage with tag marker_export will be used |
| target_path | String | Target filename on the storage to upload result file to |
Export a file to an external location.
| Parameter | Type | Description |
|---|---|---|
| export_url | String | Base location to export files to |
| append_filename_to_url | Boolean | If set, the filename of the file in the database will be appended to the export URL |
Create a new poster, and upload it to a target storage. See POSTER action documentation for more information.
| Parameter | Type | Description |
|---|---|---|
| source_path | String | URL to source video file |
| image_format | String | Poster image file format |
| destination_resolution | String | Set POSTER action parameter destination_resolution |
| region_source_in | String | Set POSTER action parameter region_source_in |
| region_source_out | String | Set POSTER action parameter region_source_out |
| frame | String | Set POSTER action parameter frame |
| target_path | String | Target filename on given storage to upload poster to |
| target_storage_id | String | Upload the poster to the given storage |
| file_metadata:X | String | Set metadata field X on the resulting file to the given value |
| target_asset_id | String | Optionally attach the poster to the given asset |
Extract audio waveforms and upload the resulting data files to storage. Will create three waveform data files per audio stream in the
given file. The storage used will be the first storage found with tag waveform.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Asset to attach all resulting files to |
Run AWS Rekognition on the given asset, and update metadata and timespans on the asset based on job progress and result.
| Parameter | Type | Description |
|---|---|---|
| asset_id | String | Asset to analyze |
| file_id | String | File to analyze |
| file_location_id | String | File location to use for analysis |
| rekognition_type | String[] | Set of AWS Rekognition analysis actions to run, valid values: shot, technical_cue, content_moderation, label_detection, celebrity_detection, face_detection, text_detection |
| shot_confidence | Integer | Confidence parameter for shot detection |
| technical_cue_confidence | Integer | Confidence parameter for technical cue detection |
| content_moderation_confidence | Integer | Confidence parameter for content moderation detection |
| label_detection_confidence | Integer | Confidence parameter for label detection |
| celebrity_detection_confidence | Integer | Confidence parameter for celebrity detection |
| face_detection_confidence | Integer | Confidence parameter for face detection |
Queue a new AWS Rekognition job for shot detection that is allowed to fail without failing the parent job. This is commonly run during ingest.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Asset to analyze |
Retrieve and expand timeline information to prepare for render actions.
Expanding the timeline will add more metadata to each timespan. The goal is to create a self-contained timeline that contains all the information needed by the render processes, without the render needing to talk to the backend.
| Parameter | Type | Description |
|---|---|---|
| timeline | TimespanDto[] | Explicit timeline given |
| timeline_asset_id | String | Retrieve all timeline timespans from the given asset |
| implicit_timeline_asset_id | String | Create an implicit timeline with a single segment for the whole given asset |
| implicit_timeline_file_id | String | Create an implicit timeline with a single segment for the whole given file |
| expand_timeline_from_video_track | String | Will expand the timeline and add audio segments for all video segments on the input timeline |
Attach a target_file_id file (commonly set up during the render processes) to the given asset, and optionally run render/partials/video_ingest if the file type is VIDEO and skip_image_extraction is not set in the file metadata.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Asset to attach file to |
| skip_image_extraction | String (File Metadata) | If set, skip including render/partials/video_ingest |
Do common video ingest tasks on files created during render processes, such as sprite map extraction, poster creation etc.
| Parameter | Type | Description |
|---|---|---|
| target_asset_id | String | Asset to attach files to |