
About mixtrain in DynaBOA code #37

Closed
dqj5182 opened this issue Dec 30, 2022 · 6 comments

dqj5182 commented Dec 30, 2022

Good morning,

I just have a question regarding the mixtrain arguments in the DynaBOA code (including lower_level_mixtrain and upper_level_mixtrain).
I am confused about what exactly mixtrain is. From the code, it seems to let the model apply a label loss on the H3.6M batch obtained from retrieval. However, the paper does not seem to mention mixtrain; it only mentions an ablation study on "Adapting to non-stationary streaming data with highly mixed 3DPW and 3DHP videos." So I am not sure whether the mixtrain procedure in the code actually uses 3DHP videos. Could you clarify how to understand mixtrain in the DynaBOA code in relation to the T-PAMI paper?

@gsygsy96
Contributor

Hi, sorry for the confusion.
In the code, the 'mix_train' flag means that both H36M data and test data (e.g., 3DPW) are used to adapt the model (at the lower level or the upper level).
In the paper, we concatenate 3DPW and 3DHP data to form a new test dataset.
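To illustrate the idea described above, here is a toy sketch of a mix-train objective: a supervised loss on a labelled H3.6M batch plus a self-supervised loss on the unlabelled test batch, both contributing to one adaptation step. All names, shapes, and loss forms here are invented stand-ins, not DynaBOA's actual code or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a linear "regressor" and two batches.
W = rng.normal(size=(4, 3))                                        # model weights (toy)
h36m_x, h36m_y = rng.normal(size=(8, 4)), rng.normal(size=(8, 3))  # labelled retrieved H3.6M batch
test_x = rng.normal(size=(8, 4))                                   # unlabelled test batch (e.g., 3DPW)

def supervised_loss(W, x, y):
    # stand-in for the label loss on the retrieved H3.6M samples
    return np.mean((x @ W - y) ** 2)

def self_supervised_loss(W, x):
    # stand-in for the unsupervised losses (e.g., reprojection) on test frames
    return np.mean((x @ W) ** 2) * 0.01

def mix_train_loss(W):
    # with mix_train enabled, both terms drive the adaptation update
    return supervised_loss(W, h36m_x, h36m_y) + self_supervised_loss(W, test_x)
```

With the flag disabled, only the self-supervised term on the test batch would remain.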


dqj5182 commented Dec 31, 2022

Ah, I see. Then, as far as I understand, mixtrain is the general idea of using both the H3.6M and 3DPW datasets.

Lastly, I want to ask whether using both H3.6M and 3DHP is also implemented in the final DynaBOA code. May I get clarification on this part?


gsygsy96 commented Jan 2, 2023

Yes, for all test datasets we use a subset of H3.6M as the retrieval database.
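A minimal sketch of what such feature-space retrieval could look like: nearest neighbours of an encoded test frame in a pre-encoded H3.6M database. The feature dimensions and the plain Euclidean metric are assumptions for illustration, not DynaBOA's actual retrieval code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: encoder features for the H3.6M retrieval
# database and for one incoming test frame.
db_feats = rng.normal(size=(100, 16))   # subset of H3.6M, pre-encoded
test_feat = rng.normal(size=(16,))      # current test frame, encoded

def retrieve(db, query, k=5):
    # k nearest neighbours by Euclidean distance in feature space
    dists = np.linalg.norm(db - query, axis=1)
    return np.argsort(dists)[:k]        # indices into the H3.6M subset

idx = retrieve(db_feats, test_feat)     # these samples form the labelled batch
```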


Mirandl commented Apr 18, 2023

Hi, may I ask a question about your cluster files for retrieval from H3.6M?

I wonder how you generated h36m_random_sample_center_10_10.pt and cluster_res_random_sample_center_10_10_potocol2.pt.
My guess is that you put all the H3.6M images through the encoder and used those features to build the clusters, without using the regressor. Is that right?

I also wonder how you matched the feature clusters with their images; could you show me some code for it?
I ask because I want to reuse this idea and retrain the clusters for other models, so I need more details. I hope you can understand.
Thank you!
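For reference, the pipeline guessed at above can be sketched as follows: encode every frame, cluster the features, and keep the feature-to-image mapping simply by preserving row order. This is only a toy sketch of that idea under stated assumptions; the fake features, the small k-means loop, and all names are invented, not the code that produced the .pt files.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in practice each row would be encoder(frame)
# for one H3.6M frame; here the features are random.
image_paths = [f"h36m/frame_{i:06d}.jpg" for i in range(200)]
features = rng.normal(size=(200, 16))

def kmeans(X, k, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

centers, labels = kmeans(features, k=10)

# Row order is preserved through clustering, so label i belongs to
# image_paths[i] -- that is the whole feature-to-image mapping.
cluster_to_paths = {j: [p for p, l in zip(image_paths, labels) if l == j]
                    for j in range(10)}
```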

@syguan96
Owner

Hi, please give me an email address so that I can send you the code for reference. But note that I haven't cleaned it up.


Mirandl commented Apr 18, 2023

> Hi, please give me an email address so that I can send you the code for reference. But note that I haven't cleaned it up.

Hi, my address is miranda018@163.com. Thank you!

@dqj5182 dqj5182 closed this as completed Apr 28, 2024