

Trouble with prediction refinement #92

Closed
aidanpriceUON opened this issue Jul 15, 2024 · 4 comments
Labels: question (Further information is requested), stale (Issues without response for a long time)

Comments

@aidanpriceUON

I've been trying to get a model working for bottom-up recording and tracking of a mouse, with the intention of automatically recording grooming and rearing behaviours. However, on output A-SOiD predicts that 99.9% of the video is rearing. During refinement, the rearing examples are quite large because most of the video is considered rearing, and after long attempts at refining the behaviour the outputs are still all rearing. Any suggestions on how to stop the model from biasing towards rearing? I've attached configurations/outputs.

@JensBlack JensBlack self-assigned this Jul 23, 2024
@JensBlack JensBlack added the question Further information is requested label Jul 23, 2024
@JensBlack
Collaborator

I will need additional information on this to be able to help.

  1. How large is your training set?
  2. What is the performance of the classifier that you used for the shown prediction?
  3. Are you predicting on new data or on parts of the training set?

The performance you are showing should not produce results like the one in the last step unless the video genuinely looks like that.

  1. Is this a video with a different camera perspective?
  2. Did you verify the integrity of the pose estimation?

@aidanpriceUON
Author

aidanpriceUON commented Jul 25, 2024 via email

@JensBlack
Collaborator

Hi Aidan,

Thanks for your patience. From the details you have given me, I cannot tell what went wrong on your side so far.

When you run the active learning step and set the parameters, you are shown the number of labels that go into the first iteration of learning. Can you send me that?

Also, when creating the project, there is an info box that tells you how many labels are present in the data. Can you send me that as well?

As a sanity check

What happens if you predict on the video used for training? The model should be really close to the original annotations. If that is not the case, we will need to dig deeper!
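A quick way to run that sanity check offline is to compare the exported frame-wise predictions with the original annotations. This is a minimal sketch, not A-SOiD's API: the label lists, class names, and helper functions here are hypothetical stand-ins for whatever frame-wise labels you export from your project.

```python
from collections import Counter

def frame_agreement(predicted, annotated):
    """Fraction of frames where the predicted label matches the annotation."""
    assert len(predicted) == len(annotated), "label sequences must align frame-for-frame"
    matches = sum(p == a for p, a in zip(predicted, annotated))
    return matches / len(predicted)

def label_distribution(labels):
    """Per-class fraction of frames, to spot a heavy bias toward one class."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

# Toy frame-wise labels standing in for real exported A-SOiD outputs
annotated = ["groom", "groom", "rear", "other", "rear", "other"]
predicted = ["rear",  "groom", "rear", "rear",  "rear", "other"]

print(frame_agreement(predicted, annotated))   # should be close to 1.0 on the training video
print(label_distribution(predicted))           # is one class dominating?
```

If the agreement on the training video is low, or the distribution shows one class near 100% while the annotations are balanced, that points to a problem before refinement (training data, features, or pose estimation) rather than the refinement step itself.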

Concerning refinement

Refinement takes a while, as you are manually feeding in additional data piece by piece, whereas the active learning tab ingests that data in seconds. If you have additional fully annotated videos, it might be more beneficial in your case to add them at initial project creation.

@JensBlack
Collaborator

Closing because it went stale. Feel free to reopen if new information arises.

@JensBlack JensBlack added the stale Issues without response for a long time label Oct 21, 2024