Time in IIIF Presentation API

(contains sound, requires mp4 support - for demo only!)

This is an experiment with time in the IIIF Presentation API: a temporal dimension (e.g., t=5,20) is allowed alongside the existing spatial dimensions in the fragment part of an annotation target:

"target": ",60,500,100&t=5,20"

The canvas itself also has a duration:

"duration": 120
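Putting the two together, a canvas with a duration and a time-targeted annotation might be sketched as follows. This is a sketch only: the canvas URI, media URI, property nesting, and the leading xywh value are placeholders, not part of the demo.

```json
{
  "@type": "sc:Canvas",
  "@id": "https://example.org/canvas/c1",
  "width": 1000,
  "height": 750,
  "duration": 120,
  "content": [
    {
      "@type": "oa:Annotation",
      "motivation": "sc:painting",
      "resource": {
        "@id": "https://example.org/clip.mp4",
        "@type": "dctypes:MovingImage"
      },
      "target": "https://example.org/canvas/c1#xywh=0,60,500,100&t=5,20"
    }
  ]
}
```

Read together, this says the clip paints a 500×100 region of the canvas from second 5 to second 20 of the canvas's 120-second duration.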


Adding time to Shared Canvas is necessary for AV in IIIF. This allows media to be positioned in space and time. Reconstructing The Magnificent Ambersons with annotations of studio stills, screenplay, scores etc is an AV use case analogous to existing manuscript use cases for images. Conveying the audio sequencing and the packaging of The White Album is important for sound collections.

Time in the Presentation API allows storytelling. If your collection is full of images, audio, video and text, you can create IIIF collections and manifests that tell stories through annotation, and pull in fragments of video from anywhere over HTTP. This demo is simple, to give context to discussions about the Presentation API and IIIF AV APIs, but the same model could be used to create almost any timed presentation or exhibit.

It would be great if, for example, the multimedia works of Charles and Ray Eames were modelled with the IIIF Presentation API and available for annotation, from relatively simple three projector slide shows to the complexity of Think from the 1964 New York World's Fair.

This demo only supports integer seconds. Other time fragment syntax, as specified in the W3C Media Fragments URI specification, would also need to be supported.
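A fuller client would need to parse the npt (normal play time) forms of the Media Fragments syntax, not just integer pairs. A minimal sketch (the function name is hypothetical, not part of this demo):

```javascript
// Parse the time portion of a Media Fragments URI fragment.
// Handles "t=5,20", "t=5.5,20", "t=npt:5,20", an open end ("t=30"),
// and a missing start ("t=,20", meaning start at 0).
function parseTimeFragment(fragment) {
  const match = fragment.match(/(?:^|&)t=(?:npt:)?([\d.]*)(?:,([\d.]+))?/);
  if (!match) return null;
  return {
    start: match[1] ? parseFloat(match[1]) : 0,          // missing start means 0
    end: match[2] !== undefined ? parseFloat(match[2]) : Infinity // open-ended
  };
}
```

The same function works on a combined spatial and temporal fragment such as `xywh=0,60,500,100&t=5,20`, because it only looks for the `t=` component.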

Time is required for IIIF AV, but a bitstream API equivalent to the Image API is not required for simple presentation of AV material, just as the Presentation API does not require images to have Image API services. The simplest AV example would be a single video annotation filling the whole canvas for its entire duration, just as a single image annotates a whole canvas in the simplest image use case.
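That simplest case might be sketched like this (URIs are placeholders): one video annotation whose target is the whole canvas, with no fragment, on a canvas whose duration matches the video's.

```json
{
  "@type": "oa:Annotation",
  "motivation": "sc:painting",
  "resource": {
    "@id": "https://example.org/movie.mp4",
    "@type": "dctypes:MovingImage"
  },
  "target": "https://example.org/canvas/c1"
}
```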


The use of setTimeout to show and hide the annotations is probably naive. A IIIF client should offer pause, play, and scrub over a canvas clock (a global clock separate from the play time of the individual media). Video, audio and text annotations need to be synchronised, and to stay synchronised in the face of variable CPU load.
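One way to sketch this (names are hypothetical, not from the demo): keep the visible set of annotations a pure function of canvas time, and recompute it from elapsed wall-clock time on every frame, so that variable CPU load delays updates but never accumulates drift.

```javascript
// Return the annotations active at a given canvas time (in seconds).
// Each annotation carries the start/end of its temporal fragment.
function activeAnnotations(annotations, canvasTime) {
  return annotations.filter(a => canvasTime >= a.start && canvasTime < a.end);
}

// In a browser, drive the canvas clock from requestAnimationFrame rather
// than chained setTimeout calls: the time is always derived from
// performance.now(), not from counting ticks.
function startClock(annotations, onChange) {
  const t0 = performance.now();
  function tick() {
    const canvasTime = (performance.now() - t0) / 1000;
    onChange(activeAnnotations(annotations, canvasTime), canvasTime);
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

Pause, play, and scrub then become operations on the clock (offsetting or freezing `canvasTime`), with each media element seeked to match.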

A real client can use dedicated timing libraries and synchronisation techniques to do this more reliably.

Client side

The Presentation API model can describe complex manipulations that modern browsers are quite capable of performing client-side.
