
auto-advance doesn't allow parts of canvases to advance #1632

Closed · azaroth42 opened this issue Jun 26, 2018 · 9 comments
Labels: A/V, normative, presentation, Ready-for-TRC (Normative changes ready for TRC review)

@azaroth42 (Member)

A canvas might represent a long stretch of content, such as a tape recording of oral histories with several histories being present. Auto-advance should allow the particular history to be pieced together from the end of the segment of the canvas, rather than only taking effect at the end of a canvas.

Thus the point at which the play-head advances is determined by the encapsulating resource (e.g. Range) not the Canvas with content, contrary to the definition here.
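For illustration, a minimal sketch of the behaviour being requested (all URIs hypothetical): a Range that pieces one history together from temporal fragments of a long tape Canvas and carries auto-advance itself, so that the play-head should jump from the end of one fragment straight to the start of the next.

```json
{
  "id": "https://example.org/iiif/oral-histories/range/history-1",
  "type": "Range",
  "label": { "en": [ "Oral history 1" ] },
  "behavior": [ "auto-advance" ],
  "items": [
    { "id": "https://example.org/iiif/oral-histories/canvas/tape1#t=900,1800", "type": "Canvas" },
    { "id": "https://example.org/iiif/oral-histories/canvas/tape1#t=2400,3000", "type": "Canvas" }
  ]
}
```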

@workergnome

I worry that this (and repeat) are going to create no end of edge cases, because they're describing event-based behaviors, not interface configuration behaviors. Because of that, we suddenly need to worry about inheritance and precedence, and bubbling, and all the other event-handling stuff that every event definition system needs to worry about.

Could we not meet this same need by using canvas-on-canvas annotations, with a loop if repeat is needed?
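For concreteness, a rough sketch of that alternative (hypothetical URIs; only two segments shown): a single long Canvas onto which the source Canvases are painted at successive times, so playback is continuous without any auto-advance behaviour at all.

```json
{
  "id": "https://example.org/iiif/tapes/canvas/whole",
  "type": "Canvas",
  "duration": 3600.0,
  "items": [
    {
      "id": "https://example.org/iiif/tapes/page/whole",
      "type": "AnnotationPage",
      "items": [
        {
          "id": "https://example.org/iiif/tapes/anno/side1",
          "type": "Annotation",
          "motivation": "painting",
          "body": { "id": "https://example.org/iiif/tapes/canvas/side1", "type": "Canvas" },
          "target": "https://example.org/iiif/tapes/canvas/whole#t=0,1800"
        },
        {
          "id": "https://example.org/iiif/tapes/anno/side2",
          "type": "Annotation",
          "motivation": "painting",
          "body": { "id": "https://example.org/iiif/tapes/canvas/side2", "type": "Canvas" },
          "target": "https://example.org/iiif/tapes/canvas/whole#t=1800,3600"
        }
      ]
    }
  ]
}
```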

@azaroth42 (Member, Author)

Along with #1612 about inheritance of behaviors, is @workergnome's suggestion (get rid of auto-advance and just use a single canvas with a longer duration) an easy way out of several thorny issues?

@tomcrane tomcrane added the A/V label Jul 23, 2018
@tomcrane (Contributor)

It would be an easy way out, but we would lose some of the power of the model to present the object.

As a sound archive, it is important that I can convey in the model the distinct physical aspects of the real world object, as well as making it easy to navigate and experience for the web user.

Views vs structures at work for audio - each tape side in this long recording is a canvas:

[Image: view-nav-large]

But they are auto-advance canvases; they keep playing. We want that continuity of sound, but we also want to convey that there are a whole load of tape sides here.
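A sketch of that shape (hypothetical URIs; content annotations and the remaining sides omitted for brevity): each tape side is a discrete Canvas in manifest.items, and auto-advance on each Canvas asks the client to keep the sound running across the joins.

```json
{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/iiif/tapes/manifest",
  "type": "Manifest",
  "label": { "en": [ "Interview on 10 tape sides" ] },
  "items": [
    {
      "id": "https://example.org/iiif/tapes/canvas/side1",
      "type": "Canvas",
      "label": { "en": [ "Tape 1, Side A" ] },
      "duration": 1800.0,
      "behavior": [ "auto-advance" ]
    },
    {
      "id": "https://example.org/iiif/tapes/canvas/side2",
      "type": "Canvas",
      "label": { "en": [ "Tape 1, Side B" ] },
      "duration": 1800.0,
      "behavior": [ "auto-advance" ]
    }
  ]
}
```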

I've added some notes about this issue to this document:

https://docs.google.com/document/d/1ad4m48FEVUBSuKf8dkwG0_M10kQnvM9g6qVShvMuO8s/edit#heading=h.3wwqry1ozgsw

The heading "Problems with auto-advance" is the target of that link - I didn't want to dump it all here.

This is UV specific - how the UV generates user experience from the model - but it shows why auto-advance is a very useful tool (and uncovers some unaddressed things too - UV isn't looking for or dealing with auto-advance on ranges at all). Adopting the canvas on canvas approach would mean only one canvas in manifest.items and therefore losing the two distinct presentations of the object that views vs structures gives us. You could pull the tape-side-canvases out of the single canvas, but how does the client know to do that? Other canvas on canvas annos might NOT want that behaviour.

Elsewhere the document describes how the UV already synthesises a single virtual canvas from a run of auto-advance canvases, and renders that single canvas when navigating using ranges - and only in that mode. This approach solves many usability issues for complex content.

cc @irv @edsilv

@tomcrane (Contributor)

Diagram of ranges with auto-advance:

[Image: ranges-auto-advance]

@workergnome

I think the disagreement here is over what a canvas is. @tomcrane, you say in your doc "Canvases represent distinct views of the object." I would say that "Canvases are abstract 2D spaces for displaying content", and that time-based canvases are 3D spaces.

The discrepancy between these interpretations, I think, is whether IIIF is truly presentational, or whether we're also using it to model Real World Objects. We can create the desired behavior (via a 3D space containing a series of other 3D spaces) as an abstract presentation of the content, tailored for that specific view; it's just not a one-to-one match with our conception of the Real World Object.

@tomcrane (Contributor)

tomcrane commented Jul 29, 2018

@workergnome -

The discrepancy between these interpretations, I think, is whether IIIF is truly presentational, or whether we're also using it to model Real World Objects

I don't think this is a disagreement, I think it's two aspects of the same thing. One aspect is the model the spec gives us, the other is the application of that model by implementers of the spec, who are really keen on modelling their Real World Objects using the spec, to produce an experience of those objects for users, but not to enforce a specific user experience for that object in all contexts.

That experience is usually not some high fidelity reproduction of a material object. A IIIF client cannot be subjected to any sort of visual confirmation that it has produced the "right" result (like a CSS test). So the model is not so abstractly presentational or behavioural in the way, say, a model driving a game engine is. There is no correct user experience for IIIF, other than the implied "if you are going to implement this feature then you must respect its MUSTs" - which still doesn't prescribe a specific UI.

I agree with you that the Canvases the spec gives us are abstract 2D spaces for assembling content. Shared, and simple abstract spaces. The spec is therefore presentational.

But then that model, driven by use cases, existing practice and emerging requirements, is applied to the creation of digital surrogates for Real World Objects, where those RWOs often comprise "a series of pages, surfaces, or extents of time"[1]. People want to do that a lot, and for a stack of known scenarios the spec, abstract as it is, goes out of its way to make that as simple as possible and no simpler. Even with a split between the spec (presentational; minimal examples for the purpose of syntax) and cookbook (lots of examples of how to apply the spec to RWOs, useful patterns, encouragement of common practice, by and for the benefit of the community), the language of the spec itself is still full of mentions of RWOs to convey what the spec is for.

The community is opinionated that it wants to use this spec to model RWOs, and sometimes born-digital dimensioned content... to repeat my user story:

As a sound archive, it is important that I can convey in the model the distinct physical aspects of the real world object, as well as making it easy to navigate and experience for the web user.

Maybe I should rephrase that:

As a sound archive, it is important that I am able to convey the distinct physical aspects of the real world object, as well as making it easy to navigate and experience for the web user.

We have the means to do this, through the (still unrelentingly abstract!) Canvas. A Manifest's Canvases are discrete, dimensioned extents. Community practice, encouraged by shared recipes, uses these discrete extents in particular ways for different kinds of commonly encountered content. And complex viewers like the UV and Mirador use the discrete extents for one kind of navigation/representation of the Manifest, and structures for another. My choice in how I spread my content over manifest.items is going to be reflected when users encounter that content in multiple IIIF environments.

Pages are the obvious discrete extents for books.

Tape sides are good candidates to map to discrete extents; we're saying something if manifest.items is 10 sides of tape as 10 canvases. 10 discrete extents of time.

It's not the spec saying you must do this; the spec, as we agree, is just providing these abstract discrete extents. If we want to produce the specific user experience of these 10 sides of tape played as a single extent of time, we can certainly do that in the way you describe, but we've then asserted just one extent in manifest.items (it only has one canvas). auto-advance allows us to convey that this object comprises 10 discrete extents, and expect that a client will reflect that in navigation/UI somehow, but that a client should still carry on from one extent to the next without pause.

That is a separate issue from the meaning of auto-advance on ranges... I'll add a comment about that!

[1] from the Introduction, which I think is correct in its stance.

@tomcrane (Contributor)

A note from the community call discussion about auto-advance behaviour and the picture 2 comments above.

  • If those ranges represented fragments of recordings that happen to be spread across different bits of tape, but are intended to be heard as a single audio extent, then we WOULD want the parts of the canvases to auto-advance, one to another, regardless of whether the Canvas had that behaviour (the canvas could easily NOT have that behaviour, because the end of that Canvas may be unrelated to the start of the next).
  • If the blue ranges represented dialogue spoken by person A and the orange ranges represented dialogue spoken by person B, we probably wouldn't want them to be run together. Again, this could be unrelated to any auto-advance behaviour on the canvas.

Does this mean that auto-advance on Canvases is independent of auto-advance on Ranges? They simply are behaviours that apply in different contexts; it depends on how the user initiated a particular interaction (played a range, played a canvas).
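A hedged sketch of that independence (hypothetical URIs; an excerpt of a Manifest's structures property): two Ranges over the same Canvases, where only the one representing a continuous recording carries auto-advance, and the Canvases themselves need not carry it at all.

```json
"structures": [
  {
    "id": "https://example.org/iiif/tapes/range/recording-a",
    "type": "Range",
    "label": { "en": [ "Recording A, split across tape sides" ] },
    "behavior": [ "auto-advance" ],
    "items": [
      { "id": "https://example.org/iiif/tapes/canvas/side3#t=1500,1800", "type": "Canvas" },
      { "id": "https://example.org/iiif/tapes/canvas/side4#t=0,600", "type": "Canvas" }
    ]
  },
  {
    "id": "https://example.org/iiif/tapes/range/speaker-b",
    "type": "Range",
    "label": { "en": [ "Dialogue, person B" ] },
    "items": [
      { "id": "https://example.org/iiif/tapes/canvas/side3#t=200,450", "type": "Canvas" },
      { "id": "https://example.org/iiif/tapes/canvas/side3#t=900,1100", "type": "Canvas" }
    ]
  }
]
```

A client playing "Recording A" would run its two fragments together; a client playing Canvas side3 from the start would stop at its end, since the Canvas carries no behaviour of its own.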

That introduces other problems of state that don't apply for spatial dimensions. Not problems for the model, once we've sorted out what auto-advance means and updated the definition(s), but problems for client implementations that run into the event-related issues @workergnome mentions.

This 2D spatial example may be useful for comparison:
https://tomcrane.github.io/iiif-collector/#objects/longer-article.json

This happens to gather the target extents of the range from their canvases and assemble them. But it could have highlighted the two page parts in their whole canvases; that would also be a legitimate rendering. It's a feature of the client (a very simple client in this case); choose (or write) a client to do what you want. Both use cases are accommodated without having to add new behaviours.

Here's the problem. From the point of view of description of content, time is just one more dimension. No different from adding a z dimension. We're just saying this content is here, in this space, at this time. Annotations work the same way for more dimensions. A static observation of that description just says where and when everything is. We can accommodate any complexity of ranges describing where that stuff is. As JSON, as data, as model, it's no problem at all. It's just stuff addressing dimensions.
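The symmetry is visible in the addressing itself (hypothetical URIs): the same fragment mechanism picks out an extent of space or an extent of time on a Canvas, and a Range item can point at either.

```json
[
  { "id": "https://example.org/iiif/article/canvas/p1#xywh=0,900,1200,600", "type": "Canvas" },
  { "id": "https://example.org/iiif/tapes/canvas/side1#t=120,300", "type": "Canvas" }
]
```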

But the user experience of content with a temporal dimension is fundamentally different - state is changed by the passing of time (reaching the end of canvases, entering and leaving extents that ranges point at). A client has to react to elapsing time, not just user action. This is just the way the Universe works for us! This is the source of most of the complexity we have to deal with for these complex AV use cases. It's not a modelling issue - we can be clear about the assembly of content in space/time, about what's there. Our content doesn't have to do anything at particular locations or times; it just is there, invariant. IIIF as a model doesn't need to do anything else. Content is always just content.

Instead it's an issue for interpreting the model to create user experience, which is raising these nuggets of awkwardness. I don't think they are showstoppers though.

@azaroth42 (Member, Author)

Eds call -- Currently the spec has a bug: auto-advancement cannot work from segments of a canvas, only at the end of a canvas. Implementing this is complicated, but the specification needs to allow it.
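For the record, a sketch of the sort of structure the corrected definition needs to permit (hypothetical URIs): a Range whose items are parts of Canvases, here expressed as SpecificResources with media-fragment selectors per the Web Annotation model, with the Range itself carrying the auto-advance behaviour.

```json
{
  "id": "https://example.org/iiif/tapes/range/history-2",
  "type": "Range",
  "behavior": [ "auto-advance" ],
  "items": [
    {
      "type": "SpecificResource",
      "source": "https://example.org/iiif/tapes/canvas/side1",
      "selector": {
        "type": "FragmentSelector",
        "conformsTo": "http://www.w3.org/TR/media-frags/",
        "value": "t=900,1800"
      }
    },
    {
      "type": "SpecificResource",
      "source": "https://example.org/iiif/tapes/canvas/side2",
      "selector": {
        "type": "FragmentSelector",
        "conformsTo": "http://www.w3.org/TR/media-frags/",
        "value": "t=0,600"
      }
    }
  ]
}
```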

@azaroth42 azaroth42 added the Ready-for-Eds Editorial changes ready for Editorial review label Oct 3, 2018
@azaroth42 (Member, Author)

Closed by #1681

@azaroth42 azaroth42 added Ready-for-TRC Normative changes ready for TRC review and removed Ready-for-Eds Editorial changes ready for Editorial review labels Feb 6, 2019