
TPAC Breakout: Video metadata for moving objects & sensors on the web #1194

Open
rjksmith opened this issue Sep 28, 2020 · 9 comments
Assignees
Labels
webvmt Web Video Map Tracks Format issues

Comments

@rjksmith
Member

rjksmith commented Sep 28, 2020

Background

Emerging markets in 'mobile video devices' such as dashcams, drones, body-worn cameras and smartphones are increasing consumer demand for geotagged video on the web, particularly with access to moving objects, e.g. distance & speed for vehicles, and to sensor data, e.g. heart rate for fitness users.

Web browser integration of timed video metadata enables users to easily access and share their data with the online community, and makes geotagged videos accessible to web search engines in a common format. The OGC Testbed-16 Full Motion Video to Moving Features task has highlighted the benefits of exporting embedded (in-band) metadata to a separate, linked (out-of-band) file for web access using Web Video Map Tracks (WebVMT), including the need for a web API for moving objects & sensors.

Discussion

Key concepts include:

  1. Data sync to synchronise timed metadata, including location, with video on the web;
  2. Interpolation to represent intermediate values between sample points for sensors and locations;
  3. Moving objects to determine distance, speed, etc. from a timed location sequence (trajectory);
  4. 3D coordinates to (optionally) include height/altitude/elevation, though 2D should be the initial focus.
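To make concept 2 concrete, here is a minimal sketch (illustrative only, not part of any WebVMT spec or API) of linearly interpolating a timed sensor value between sample points; the function name and data layout are assumptions for illustration.

```python
# Illustrative sketch: linear interpolation of a timed sensor value
# between sample points (key concept 2). Not from any specification.
def interpolate(samples, t):
    """samples: list of (time, value) pairs sorted by time; t: query time."""
    if t <= samples[0][0]:
        return samples[0][1]   # clamp before the first sample
    if t >= samples[-1][0]:
        return samples[-1][1]  # clamp after the last sample
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)  # fraction of the interval elapsed
            return v0 + f * (v1 - v0)

# Heart rate samples at 0s and 10s; the value at 5s is the midpoint.
print(interpolate([(0.0, 60.0), (10.0, 80.0)], 5.0))  # 70.0
```

The same approach applies to locations by interpolating latitude and longitude independently, though other interpolation modes (e.g. step or spline) may suit some use cases better.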

The aim is to draft a lightweight solution, so consideration should be given to:

  1. Relevance to identified common use cases;
  2. Flexibility to accommodate most use cases;
  3. Simplicity to minimise design complexity & processing overheads;
  4. Alignment with existing standards.

Participants are urged to join and contribute to the WebVMT Community Group (CG) where GitHub issues form focal points for the key issues to enable ideas to be shared and discussed before the TPAC meeting.

Objectives

The goal is to draft a justified list of properties for inclusion in a web API which is designed for accessing moving objects & sensors in a browser and suitable for 'mobile video devices.'

Participation

All participants should register for TPAC 2020.

Please add relevant comments to the following issues which will be used to capture feedback by Friday 23 October 2020:

These issues will form the basis of discussion and the conclusions agreed at the breakout meeting in the week 26-30 October 2020.

@rjksmith rjksmith added the webvmt Web Video Map Tracks Format issues label Sep 28, 2020
@rjksmith rjksmith added this to To do in Web Video Map Tracks (WebVMT) Format via automation Sep 28, 2020
@rjksmith rjksmith self-assigned this Sep 28, 2020
@rjksmith
Member Author

rjksmith commented Oct 1, 2020

Breakout session proposed for TPAC 2020 during the breakout week, 26-30 October.

@rjksmith
Member Author

Thanks to all who have provided feedback so far.

The breakout discussion has been scheduled for 14:00-15:00 UTC on Monday 26 October. I'll collate the input to date and summarise this in a brief presentation which will be followed by discussion of the proposed web API for moving objects and sensor data.

I look forward to your contributions on Monday.

@rjksmith
Member Author

rjksmith commented Oct 26, 2020

Breakout Session

  • Links to the Zoom call for the breakout session are on the TPAC web page which includes a calendar invite. The IRC channel is also linked from the same web page.

  • Presentation slides are now available for download.

@rjksmith
Member Author

Many thanks to all those who participated in the online breakout session on Monday 26 October 2020 and contributed to the discussion. I've collated the feedback with the session goals and minutes.

Conclusions

  1. Proposed attributes for a moving object
    A moving object is defined as an identified object associated with a sequence of timed locations. The following calculations are sufficiently lightweight and would impose a minimal processing overhead on the web browser.

    1. Location - interpolated from the sequence of timed locations;
    2. Distance - calculated from the cumulative linear change in location over time;
    3. Heading - calculated from the angular change in location over time;
    4. Speed - calculated from the change in distance over time;
    5. Description - description of the moving object, e.g. a front-facing dashcam.
  2. Proposed attributes for a sensor
    A sensor is defined as an identified object associated with a sequence of timed observations. The following attributes are sufficiently small and would impose a minimal storage overhead on the web browser.

    1. Value - interpolated from the sequence of timed observations;
    2. Description - description of the sensor, e.g. heart rate monitor;
    3. Units - units of measurement, e.g. m/s, rpm;
    4. Type - variable type, i.e. string, number, boolean, object, array;
    5. Range - numeric range (optional), e.g. [-90, 90].
  3. Other topics discussed
    The following issues were raised for further discussion:

    1. Indoor mapping use case - associating video with an indoor space, rather than geospatial co-ordinates;
    2. Location privacy issues - privacy & security implications of sharing video-location data online;
    3. Video metadata search - benefits of using out-of-band metadata with online search engines for privacy, security & bandwidth - see Area Monitoring use case for more details;
    4. Browser API integration - how video metadata can be integrated with web browsers & HTML DataCue - see WICG DataCue activity for more details;
    5. Weather data use case - associating video with weather data - see Weather Data in Schema issue for more details;
    6. Multiple moving objects - how more than one moving object can be tracked concurrently with WebVMT - see TV Sports Coverage use case for more details;
    7. Image stabilisation use case - frame timing accuracy required to sync sensor data for image stabilisation.
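The moving-object attributes in conclusion 1 can be sketched as follows. This is an illustrative calculation (function names and data layout are assumptions, not from the WebVMT spec) using the standard haversine great-circle distance and initial-bearing formulae, restricted to 2D as agreed for the initial focus.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, spherical approximation

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in metres between two lat/lng points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def heading_deg(lat1, lng1, lat2, lng2):
    """Initial bearing in degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lng2 - lng1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def cumulative_distance_m(track):
    """track: list of (time_s, lat, lng); cumulative linear change in location."""
    return sum(haversine_m(a[1], a[2], b[1], b[2]) for a, b in zip(track, track[1:]))

def mean_speed_ms(track):
    """Change in distance over elapsed time, in m/s."""
    return cumulative_distance_m(track) / (track[-1][0] - track[0][0])
```

These per-segment calculations are lightweight, consistent with the conclusion that they would impose minimal processing overhead on a browser. Note the heading derived this way is direction of travel, which, as discussed below, may differ from the object's orientation (geopose).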

@chris-little
Contributor

@rjksmith Your attributes 1.i and 1.ii combined are called a GeoPose, and there is now quite a lot of work on standardising its representation, including a sequence, or even a tree, of geoposes. They are also trying to ensure that the implied calculations in any sequence of such poses can be effectively processed by modern GPU-based graphical systems.
HTH, Chris

@rjksmith
Member Author

Thanks @chris-little. You're right, though I think you mean 1.i and 1.iii which could map directly to OGC GeoPose.

Camera geopose is included in #1137.

@rjksmith
Member Author

@chris-little Having thought further about this, I'm not sure that's true, as we've assumed that moving-object orientation and heading are identical. Counterexamples include hovercraft and rally cars, where geopose and heading may not match as they tend to slide sideways.

Perhaps our assumption should be the default position, though we acknowledge that there are other possibilities which should be addressed.

Thanks for highlighting this.

@chris-little
Contributor

@rjksmith Also applies to aircraft and drones - one of your primary use cases!
