
Python+CV version #2

Open
amchagas opened this issue Oct 23, 2023 · 1 comment
amchagas commented Oct 23, 2023

Currently setting up a simpler version of the maze using OpenCV and Python. Here is a breakdown of the steps needed:

  • give users the option to draw ROIs before starting the session (should be flexible enough to take as many or as few ROIs as needed); add guidance for drawing entrance ROIs as well as reward-area ROIs

  • write up the logic for each trial

    • detect entrance into the maze to start a trial
      • count the time the mouse spent in the maze
      • count the number of times it entered the maze
    • count the amount of time the animal spent in each defined ROI
  • have a file with the servo values for each trial, the reward location, etc.
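The per-trial bookkeeping above could be sketched roughly like this. This is only a sketch: the ROI names, the `(x, y, w, h)` rectangle format, and the rule that crossing the entrance ROI toggles the in-maze state are all assumptions for illustration, not the actual maze code.

```python
class TrialTracker:
    """Per-trial bookkeeping: entries into the maze, time in the maze,
    and time spent per ROI (names and geometry are illustrative)."""

    def __init__(self, rois):
        self.rois = rois                          # name -> (x, y, w, h)
        self.entries = 0                          # times the animal entered the maze
        self.time_in_maze = 0.0
        self.time_per_roi = {name: 0.0 for name in rois}
        self.in_maze = False
        self._prev_in_entrance = False

    @staticmethod
    def point_in_roi(pos, roi):
        x, y, w, h = roi
        px, py = pos
        return x <= px < x + w and y <= py < y + h

    def update(self, pos, dt):
        """Call once per frame with the animal position and frame duration."""
        in_entrance = self.point_in_roi(pos, self.rois["entrance"])
        # crossing the entrance ROI toggles the in-maze state (a simplification)
        if in_entrance and not self._prev_in_entrance:
            if not self.in_maze:
                self.in_maze = True
                self.entries += 1                 # a new trial starts here
            else:
                self.in_maze = False
        self._prev_in_entrance = in_entrance
        if self.in_maze:
            self.time_in_maze += dt
            for name, roi in self.rois.items():
                if self.point_in_roi(pos, roi):
                    self.time_per_roi[name] += dt
```

In a real session, `update` would be fed the detected animal position and the inter-frame interval on every camera frame.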

habituation: animals are free to explore the maze, no reward.
phase 1: animals are free to explore; the gratings are in the correct position for each trial, and if an animal reaches the reward ROI, it gets a pellet.
phase 2: same as phase 1, but animals need to use the grating information to go directly to the correct ROI; they get no reward if they visit the wrong place first.
phase 3: same as phase 2, but the reward probability changes at each location, so attending the correct location will not always give a reward. Introduce more than one correct location per trial.
phase 4: same as phase 3, but with electrophysiology.
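For phase 3, the probabilistic reward could be drawn per visit from a per-location probability table, for example (ROI names and probability values here are made up for illustration):

```python
import random

# Illustrative phase-3 trial configuration: more than one correct location,
# each with its own reward probability (values are made up).
trial_config = {"correct_rois": {"roi_2": 1.0, "roi_4": 0.5}}

def maybe_reward(visited_roi, config, rng=random):
    """Return True if this visit should deliver a pellet."""
    prob = config["correct_rois"].get(visited_roi, 0.0)
    return rng.random() < prob
```

ROIs absent from `correct_rois` get probability 0, so wrong locations never deliver a pellet.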

output:

  • raw video needs to be saved
  • file/table with the session data needs to be saved (time per trial, which animal, which training phase it was, time per session, time per ROI, x/y position of the animal, etc.).
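A minimal sketch of the tabular output, assuming one CSV row per trial; the field names below are illustrative, not the actual file format:

```python
import csv
import os
import tempfile

# Hypothetical per-trial records; field names mirror the output list above.
rows = [
    {"animal": "m01", "phase": "phase1", "trial": 1,
     "trial_time_s": 34.2, "time_in_reward_roi_s": 3.1, "x": 120, "y": 88},
]

def save_session(path, rows):
    """Write one CSV row per trial."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

# demo: write to a temporary file and read the result back
path = os.path.join(tempfile.gettempdir(), "session_demo.csv")
save_session(path, rows)
with open(path) as f:
    saved_lines = f.read().splitlines()
```

The raw video would be saved separately (e.g. via `cv2.VideoWriter`), with this table alongside it.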
amchagas (author) commented:
The initial version of the new code is done and is currently being used and improved as users provide feedback.

As we are going to get help from Henry, who knows Python quite well, we are keeping this issue open to collect needs and wants so that development keeps going.

One major thing: right now, different tasks/projects that run on the same hardware are coded as completely separate entities. To reflect this, we have added the tasks here as completely separate too. However, with better-organised code we could potentially have a single code base, which would be great for long-term maintenance and would avoid mistakes that could arise from updating one part of the code and not another.


Maze without auditory cues

Here are things that we would like to see done as soon as possible:

code clean up

  • review code and clean it up a bit

    • right now we have a "support function" file where all functions are being stored and a main script where the actual maze control happens. It works but it is getting quite long and disorganized. It would be good to refactor this for modularity and organization.
  • Right now there is a mix of camelCase and snake_case variable names; during the refactor and reorganisation it would be good to stick to the Python-recommended style (snake_case, per PEP 8)

  • right now the code can be used both to record new data and to analyse old videos. We need to disentangle these two things, because mixing them is leading to errors when analysis of old videos is done. Creating a separate script solely for video analysis might be a way to do it, especially if things are refactored and organised (as per one of the first items in the list); alternatively, flags or automated detection of the use case (new recording vs. analysis of existing videos) would also be a possibility.
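One way to disentangle the two use cases is explicit sub-commands rather than automated detection; a sketch using `argparse` (the command and flag names are assumptions, not the actual script interface):

```python
import argparse

def build_parser():
    """Separate 'record' and 'analyze' modes as sub-commands, so the two
    code paths never get mixed up (names are illustrative)."""
    parser = argparse.ArgumentParser(prog="maze")
    sub = parser.add_subparsers(dest="mode", required=True)
    rec = sub.add_parser("record", help="run a live session and record video")
    rec.add_argument("--animal", required=True, help="animal ID")
    ana = sub.add_parser("analyze", help="re-analyze an existing video")
    ana.add_argument("video", help="path to a previously recorded video")
    return parser
```

The main script would then dispatch on `args.mode`, with recording and analysis living in separate modules.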

Things that are not pressing

Desired features:

  • for every trial, register the order in which each reward area has been visited and add it to the dataframe with trial results. This will enable experimenters to calculate visitation counts, as well as see whether the animals improve gradually over time (e.g. if the number of wrongly visited reward areas decreases over trials).
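The visit-order bookkeeping could look like this (function and ROI names are illustrative):

```python
def record_visit(visit_order, roi_name):
    """Append the first visit to each reward area within a trial;
    repeated visits to the same area are not duplicated."""
    if roi_name not in visit_order:
        visit_order.append(roi_name)
    return visit_order

def wrong_visits(visit_order, rewarded_roi):
    """Number of reward areas visited before the correct one."""
    if rewarded_roi in visit_order:
        return visit_order.index(rewarded_roi)
    return len(visit_order)
```

Per trial, `visit_order` and `wrong_visits(...)` would become two extra columns in the trial-results dataframe.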

  • an information panel showing information about the current trial (whether the animal is in or out of the maze, which area is rewarded), the number of total hits, the total number of trials, the total number of misses, etc. Initially this could simply be another OpenCV window with text written on it.
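A simple way to start: build the panel text per frame, then draw each line onto a blank frame with `cv2.putText` and show it in a second window. The state keys below are assumptions for illustration:

```python
def panel_lines(state):
    """Compose the info-panel text; each returned line could be drawn
    with cv2.putText in a dedicated OpenCV window (keys are illustrative)."""
    return [
        "trial {} / phase {}".format(state["trial"], state["phase"]),
        "animal: {}".format("in maze" if state["in_maze"] else "out of maze"),
        "rewarded area: {}".format(state["rewarded_roi"]),
        "hits {hits}  misses {misses}  trials {trials_total}".format(**state),
    ]
```

Keeping the text composition separate from the drawing makes it trivial to also log the same lines to a file.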

  • Right now, the tracking is done by transforming the frame into a binary image with a fixed threshold. It would be nice to give users the capability of changing the threshold with a sliding bar or something similar.
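The thresholding itself is a one-liner; the slider would then just feed this function a live value. A sketch (the trackbar wiring in the comment uses OpenCV's real `cv2.createTrackbar` API, but the window and callback names are assumptions):

```python
import numpy as np

def apply_threshold(gray, thresh):
    """Binarize a grayscale frame. In the live GUI, `thresh` would come
    from a slider, e.g.:
        cv2.createTrackbar("thresh", "controls", 127, 255, on_change)
    and be read back with cv2.getTrackbarPos before each frame."""
    return (gray > thresh).astype(np.uint8) * 255
```

Because the function is pure, the same code path serves both the live recording and the offline video analysis.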

  • also related to the point above: another way of detecting where the animal is might be useful, or could complement the current one. It would be nice to get the centroid of the animal and its trajectory in x and y coordinates on every trial (there is already some code in place from Bel that starts to do that).
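On the thresholded frame, the centroid can be computed directly from the mask pixels; this is equivalent to the `m10/m00`, `m01/m00` ratios that OpenCV's `cv2.moments` returns. A sketch:

```python
import numpy as np

def centroid(binary):
    """Centroid (x, y) of the animal blob in a binary mask; equivalent to
    cv2.moments m10/m00 and m01/m00 on the thresholded frame."""
    ys, xs = np.nonzero(binary)
    if xs.size == 0:
        return None          # nothing detected on this frame
    return float(xs.mean()), float(ys.mean())
```

Appending one `(x, y)` pair per frame yields the per-trial trajectory to store alongside the session table.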

Maze with auditory cues

things to do soon:

  • right now, once the session starts, there are four ROIs and each gets mapped to a specific sound. We would like this mapping to change after a user-specified number of trials (see the "create_trials" function in the supfun file for inspiration on how to approach this).
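One way to approach the remapping: derive the shuffle from the trial's block index, so the mapping is stable within a block of trials and changes at each block boundary. ROI/sound names and the seeding scheme below are illustrative, not taken from the existing code:

```python
import random

def remap_sounds(rois, sounds, trial_idx, block_size, seed=0):
    """Reshuffle the ROI -> sound mapping every `block_size` trials.
    Seeding by block keeps the mapping constant within a block and
    reproducible across sessions (seeding scheme is illustrative)."""
    block = trial_idx // block_size
    order = list(sounds)
    random.Random(seed + block).shuffle(order)
    return dict(zip(rois, order))
```

Because the mapping is a pure function of the trial index, it can also be regenerated after the fact for analysis.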

  • investigate why, in some trials, the sound being played "stutters" when the animal is in the cued area. This might be related to the threshold detection: if the animal sits in a position where detection hovers right at the threshold limit, the sound cue could be switched on/off repeatedly.
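If the stutter does come from borderline detections, a debounce on the cue state is a cheap fix: only switch the sound on or off after several consecutive frames agree. A sketch (the frame count is an illustrative parameter, not a measured value):

```python
class Debouncer:
    """Only flip the cue state after `hold_frames` consecutive frames of
    disagreement, smoothing out borderline threshold detections."""

    def __init__(self, hold_frames=5):
        self.hold = hold_frames
        self.state = False        # current debounced cue state
        self._count = 0           # consecutive frames disagreeing with state

    def update(self, detected):
        """Feed the raw per-frame detection; returns the debounced state."""
        if detected == self.state:
            self._count = 0
        else:
            self._count += 1
            if self._count >= self.hold:
                self.state = detected
                self._count = 0
        return self.state
```

The sound-playing code would then key off `debouncer.update(in_cued_roi)` instead of the raw per-frame detection.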
