Trouble with prediction refinement #92
I will need additional information on this to be able to help.
The performance you are reporting should not produce results like the one in your last step unless something about the video itself is the cause.
|
Hi Jens,
1. For this example I've used a fully scored video in BORIS (~15 minutes). I've tried using an additional video in the past and got similar results.
2. Apologies, I've been off the program for a while. How do I determine the performance of specific classifiers?
3. The image shown was predicting on new data from the same cohort. The video we trained on had a larger amount of grooming detected, but the entire video, apart from 3 seconds classified as "other", was still classified as grooming or rearing.
Videos being predicted on are recorded from the same perspective as the training video. For pose estimation we have used SLEAP, and the outputs have been relatively accurate, though the SLEAP model struggles whenever the animal stands on its hind legs. The BORIS videos also have rearing scored from when the paws first leave the ground to when they return. Maybe this provides enough example frames of rearing where the animal's paws look like they're planted on the ground? For reference, we record from the bottom-up.
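One way to quantify how much the pose estimation struggles (e.g. during rearing, when tracking drops out) is to count frames with missing keypoints. This is a minimal sketch on fabricated data; the array shape and NaN convention are assumptions meant to mirror a typical pose-estimation export, so substitute the tracks loaded from your actual SLEAP analysis file:

```python
import numpy as np

# Fabricated stand-in for pose data: (frames, body parts, xy).
# Real exports mark failed detections; simulated here as NaN.
rng = np.random.default_rng(0)
tracks = rng.normal(size=(100, 7, 2))   # 100 frames, 7 body parts
tracks[40:60, 3:, :] = np.nan           # simulate lost hind-paw tracking

# Flag frames where any keypoint is missing.
bad_frame = np.isnan(tracks).any(axis=(1, 2))
print(f"{bad_frame.mean():.0%} of frames have a missing keypoint")
print("first affected frame:", int(np.argmax(bad_frame)))
```

If a large fraction of rearing frames have missing or jumpy keypoints, the classifier may be learning tracking artifacts rather than the behaviour itself.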
Sincerely,
Aidan
…________________________________
From: Jens Tillmann ***@***.***>
Sent: 23 July 2024 20:25
To: YttriLab/A-SOID ***@***.***>
Cc: Aidan Price ***@***.***>; Author ***@***.***>
Subject: Re: [YttriLab/A-SOID] Trouble with prediction refinement (Issue #92)
I will need additional information on this to be able to help.
1. How large is your training set?
2. What is the performance of the classifier you used for the shown prediction?
3. Are you predicting on new data or on parts of the training?
The performance you are reporting should not produce results like the one in your last step unless something about the video itself is the cause:
1. Is this a video with a different camera perspective?
2. Did you verify the integrity of the pose estimation?
|
Hi Aidan, thanks for your patience. From the details you have given me, I cannot tell what went wrong on your side so far.
When running the active learning step and setting the parameters, you are shown information about the number of labels that go into the first iteration of learning. Can you send me that? Also, when creating the project, there is an info box that tells you how many labels are present in the data. Can you send me that as well?
As a sanity check: what happens if you predict on the video used for training? The model should be really close to the original annotations. If that is not the case, we will need to dig deeper!
Concerning refinement: refinement takes a while, as you are manually feeding in additional data piece by piece, while the active learning tab takes that data in seconds. If you have additional videos fully annotated, it might be more beneficial in your case to add them at the initial project creation. |
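The sanity check suggested above (predicting on the video used for training) amounts to a frame-wise comparison between the original BORIS annotations and the model's output. The label arrays below are fabricated for illustration; in practice you would load your exported annotation and prediction arrays instead:

```python
import numpy as np

classes = np.array(["other", "grooming", "rearing"])
rng = np.random.default_rng(1)

# Fabricated frame-wise labels; replace with your exported BORIS
# annotations and the model's predictions for the training video.
annotations = classes[rng.integers(0, 3, size=1000)]
predictions = annotations.copy()
flip = rng.random(1000) < 0.05          # simulate ~5% disagreement
predictions[flip] = "rearing"

agreement = (annotations == predictions).mean()
print(f"frame-wise agreement: {agreement:.1%}")
for c in classes:
    print(f"{c}: {(predictions == c).mean():.1%} of predicted frames")
```

On the training video, agreement should be very high; if nearly every frame still comes back as rearing there, the problem is upstream of refinement (labels or features), not in the refinement loop.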
Closing because it went stale. Feel free to reopen if new information arises. |
I've been trying to get a model working for bottom-up recording and tracking of a mouse, with the goal of automatically scoring grooming and rearing behaviours. However, A-SOiD predicts that 99.9% of the video is rearing. During refinement the rearing examples are quite large, since most of the video is considered rearing, and after long attempts at refining the behaviour the outputs are still all rearing. Any suggestions on how to stop the model from biasing towards rearing? I've attached configurations/outputs.
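A prediction that collapses onto a single class is often a symptom of class imbalance in the training labels. A quick check is to tally the per-frame label counts going into training; the counts below are fabricated for illustration, so use your actual label array exported from the BORIS annotations:

```python
import numpy as np

# Fabricated per-frame labels illustrating an imbalanced training set;
# replace with the labels exported from your BORIS annotations.
labels = np.array(["rearing"] * 800 + ["grooming"] * 150 + ["other"] * 50)

values, counts = np.unique(labels, return_counts=True)
for v, c in zip(values, counts):
    print(f"{v}: {c} frames ({c / labels.size:.0%})")
```

If rearing dominates the frame counts like this, adding more grooming and "other" examples at project creation, or tightening the rearing scoring window, may reduce the bias.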