Tagging events in videos #19

Open · sfmig opened this issue Nov 9, 2022 · 7 comments
Labels: core feature (Core functionality), enhancement (Optional feature)
@sfmig (Collaborator) commented Nov 9, 2022

We would like to be able to add event tags to the collected videos (to record, for example, the instants when boxes are opened or closed).

A few options we discussed with @sannatitus:

  • Manual post-hoc tagging when reviewing the videos: probably the easiest to implement in the prototype, but undesirable in the long term since it involves more manual work.
  • Using a trigger during acquisition: seems more convenient in the long term. The experimenter can use a trigger to label an event while the video is being captured. This would involve dealing with hardware and likely use tools like Bonsai (already used by the team) or Autopilot.
  • Automatically or semi-automatically identifying the events in the video (based on, e.g., the frame in which a box opens).

Ideally, the event info would be saved in the video file's metadata or an equivalent (see #12).

@sfmig added the enhancement (Optional feature) label Nov 9, 2022
@sfmig added the core feature (Core functionality) label Nov 17, 2022
@sfmig (Collaborator, Author) commented Mar 15, 2023

Last week we (@niksirbi and I) chatted with Sanna about requirements for event tagging and the easiest way to implement this.

Thinking about how this could be as general as possible (but still simple), I think these two points should be implemented:

  • allow for one tag to have multiple frames assigned to it,
  • in the exported dataframe, add an 'event' column that is an empty string if no tag is defined for that frame, or the tag name otherwise (a minimal sketch follows this list). I'd suggest that, for now, we discard the idea of tagging all frames based on the previous and next events.
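A minimal sketch of what that 'event' column could look like in the exported dataframe (the column and tag names are placeholders, not the actual WAZP schema):

```python
import pandas as pd

# Hypothetical per-frame dataframe with an 'event' column:
# empty string where no tag is defined, the tag name otherwise.
df = pd.DataFrame({"frame_index": list(range(6)), "event": [""] * 6})
df.loc[2, "event"] = "box_open"   # 'instantaneous' event tagged at frame 2
df.loc[5, "event"] = "box_close"
print(df)
```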

Ideally we would have a tab that looks similar to the ROIs workflow:

  1. the user loads a video, plays it, and stops it when a frame needs tagging (we think this is the trickiest part; more on that below).
  2. the user selects a tag from a dropdown menu, and a table with the tag and frame numbers starts to get populated. If there are frames already assigned to this tag, they show up in the table.
  3. the user tags the frames using the relevant buttons. We agreed a tag can be defined for a single frame or an interval, so maybe we implement buttons to tag the start and end of an event; these can be set equal for an 'instantaneous' event.
  4. repeat for all tags
  5. once all tags are defined, export them to the metadata file for that video
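As an illustration of step 5, the tagged events could be written to the video's metadata file roughly like this (a sketch only; the keys, file name and layout are assumptions, not the actual WAZP metadata schema):

```python
import yaml  # pyyaml

# Hypothetical structure: each event has a tag name and a frame interval,
# with start_frame == end_frame for an 'instantaneous' event.
events = [
    {"tag": "box_open", "start_frame": 120, "end_frame": 120},
    {"tag": "box_close", "start_frame": 350, "end_frame": 410},
]

# In practice this would be merged into the existing metadata for the video;
# the file name here is made up.
with open("video_01.metadata.yaml", "w") as f:
    yaml.safe_dump({"events": events}, f, sort_keys=False)
```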

@sfmig (Collaborator, Author) commented Mar 15, 2023

Re loading and playing a video, some very preliminary thoughts from our chat:

  • playing an uploaded video in dash seems cumbersome (see this draft PR); can we use an external video player approach instead?
  • we think the following should be doable in opencv (a rough sketch follows this list):
    • play an uploaded video at normal speed,
    • pause it when a certain key is pressed (something like this),
    • allow for frame by frame playback (using other keys),
    • tag the currently displayed frame (meaning, the current frame number is extracted and linked to the event tag)
    • this or this could be useful for this goal
  • can we use opencv in dash?
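A very rough sketch of that OpenCV flow (the key bindings, tag name and video path are placeholders, not the actual WAZP implementation):

```python
import cv2

video_path = "sample_video.mp4"  # placeholder path
cap = cv2.VideoCapture(video_path)

tagged_frames = []  # list of (frame_index, tag) pairs
paused = False

while cap.isOpened():
    if not paused:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow("video", frame)

    key = cv2.waitKey(30) & 0xFF
    if key == ord("q"):                # quit
        break
    elif key == ord(" "):              # toggle pause
        paused = not paused
    elif key == ord("n") and paused:   # step one frame forward while paused
        ret, frame = cap.read()
        if ret:
            cv2.imshow("video", frame)
    elif key == ord("t"):              # tag the currently displayed frame
        # CAP_PROP_POS_FRAMES is the index of the *next* frame to be decoded,
        # so subtract 1 to get the frame currently on screen.
        frame_idx = int(cap.get(cv2.CAP_PROP_POS_FRAMES)) - 1
        tagged_frames.append((frame_idx, "box_open"))  # placeholder tag name

cap.release()
cv2.destroyAllWindows()
print(tagged_frames)
```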

@sfmig (Collaborator, Author) commented Mar 15, 2023

Second thoughts:
for v0 maybe it is good enough if tags can be defined for a single frame only... it would definitely simplify things.

@niksirbi (Member) commented Apr 3, 2023

I talked to Sanna; she prefers having the option to tag an event by entering a time (e.g. hh:mm:ss). We could convert that to a frame index based on the fps.
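For reference, a minimal sketch of that conversion (the function name and fps value are hypothetical):

```python
def timestamp_to_frame_index(timestamp: str, fps: float) -> int:
    """Convert an 'hh:mm:ss' timestamp to the nearest frame index."""
    hours, minutes, seconds = (int(x) for x in timestamp.split(":"))
    return round((hours * 3600 + minutes * 60 + seconds) * fps)

print(timestamp_to_frame_index("00:01:30", fps=40))  # -> 3600
```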

@sfmig (Collaborator, Author) commented Apr 3, 2023

Sounds good, but I would still keep the frame input as an option (and probably even the default one).

For projects at higher frame rates (e.g. 200 fps; even 1000 fps is not unusual in animal biomechanics), rounding a timestamp to the nearest frame doesn't really help much, since the behaviour can be fast.

What I think may be more useful is what we discussed before: defining an event as a range of frames (maybe we need to rescue that idea... opening an issue for now: #75).

@niksirbi (Member) commented Apr 3, 2023

Yeah, for now I will include both options, with the ability to choose between them:

  • frame index
  • timepoint (in seconds)

@sfmig (Collaborator, Author) commented May 18, 2023

Updated thoughts:

@niksirbi had a chat with a researcher who used an approach similar to what we were thinking of:

  • use opencv to load the video, play, pause etc
  • register the frames when a specific keystroke happens; we interpret that frame as tagged
  • pass that data to dash / the YAML file
