Python+CV version #2
The initial version of the new code is done and is currently being used and improved as users provide feedback. Since we will be getting help from Henry, who knows Python quite well, we are keeping this issue open to collect needs and wants so development can continue. One major point: right now, different tasks/projects that run on the same hardware are coded as completely separate entities, and we have listed the tasks here as completely separate too. However, with better-organised code we could potentially share a single code base, which would be great for long-term maintenance and would avoid mistakes that can arise from updating one part of the code and not another.

Maze without auditory cues

Things we would like to see done as soon as possible:

- code clean up

Desired improvements (things that are not pressing):

Maze with auditory cues

Things to do soon:
Currently setting up a simpler version of the maze using OpenCV and Python. Here is a breakdown of the steps needed:
- give users the option to draw ROIs before starting the session (should be flexible enough to take as many or as few ROIs as needed); add guidance for drawing entrance ROIs as well as reward-area ROIs
- write up the logic for each trial
- have a file with the servo values for each trial, the reward location, etc.
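The ROI-drawing step could lean on OpenCV's built-in `selectROIs` helper. Below is a minimal sketch, not the project's actual code: the window title, the entrance-then-reward drawing convention, and the `point_in_roi` helper are all illustrative assumptions.

```python
def point_in_roi(pt, roi):
    """Return True if an (x, y) point falls inside an (x, y, w, h) rectangle."""
    x, y = pt
    rx, ry, rw, rh = roi
    return rx <= x < rx + rw and ry <= y < ry + rh

def draw_rois(frame):
    """Let the user draw any number of ROIs on `frame` before the session starts.

    The user presses ENTER/SPACE after each rectangle and ESC when done;
    returns a list of (x, y, w, h) tuples. By convention (an assumption here),
    the on-screen guidance asks for entrance ROIs first, then reward ROIs.
    """
    import cv2  # imported locally so the geometry helper above stays dependency-free
    rois = cv2.selectROIs("Draw entrance ROIs first, then reward ROIs", frame)
    cv2.destroyAllWindows()
    return [tuple(int(v) for v in r) for r in rois]
```

During the session, `point_in_roi` can then be applied to the tracked animal position on every frame to decide which ROI, if any, is currently occupied.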
- habituation: animals are free to explore the maze; no reward.
- phase 1: animals are free to explore; the gratings are in the correct position for each trial, and if an animal reaches the reward ROI it gets a pellet.
- phase 2: same as phase 1, but animals need to use the grating information to go directly to the correct ROI; they get no reward if they visit the wrong place first.
- phase 3: same as phase 2, but we change the reward probability at each location, so that attending the correct location will not always give a reward. Introduce more than one correct location per trial.
- phase 4: same as phase 3, but with electrophysiology.
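The per-trial rules for phases 1–3 could be sketched as a single decision function. This is a hedged sketch under my reading of the phases above; the name `evaluate_visit` and its parameters are assumptions, not existing code.

```python
import random

def evaluate_visit(phase, visited_roi, correct_rois,
                   reward_prob=1.0, first_visit=True, rng=random):
    """Decide whether a ROI visit earns a pellet under the phase rules.

    phase 1: any visit to a correct ROI is rewarded.
    phase 2: only rewarded if a correct ROI is visited first.
    phase 3: like phase 2, but reward is delivered with probability
             `reward_prob`, and `correct_rois` may contain several locations.
    """
    if visited_roi not in correct_rois:
        return False
    if phase >= 2 and not first_visit:
        return False  # a wrong location was visited first: no reward this trial
    if phase >= 3:
        return rng.random() < reward_prob  # probabilistic reward
    return True
```

Keeping the phase differences in one function like this would make it easier to maintain a single code base across tasks, as the opening comment suggests.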
output:
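The output item is left unspecified above; one plausible option, sketched here purely as an illustration (the filename and column names are assumptions), is to write one CSV row per trial:

```python
import csv

def write_trial_log(path, trials):
    """Write per-trial results to a CSV file, one row per trial.

    `trials` is an iterable of dicts; the columns below are illustrative
    placeholders, not the project's actual output schema.
    """
    fields = ["trial", "reward_roi", "rewarded", "latency_s"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(trials)
```

A plain CSV keeps the session output easy to load later for analysis (e.g. with pandas) and easy to diff across task variants.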