
On a new open dataset, the performance of TSM is poor. What leads to this? #628

Closed
ZJU-lishuang opened this issue Feb 23, 2021 · 8 comments

Comments

@ZJU-lishuang

I trained TSM on the fight dataset.
The best top-1 accuracy is 0.53.
When I use other small open datasets, the top-1 accuracy is nearly 1.0.
Is the fight dataset the cause of the problem?

@ZJU-lishuang
Author

Config: tsm_r50_1x1x8_50e_customdataset_fight_rgb320.py
Log: 20210220_081407.log

Can you help me?

@innerlee
Contributor

Try I3D?

@innerlee
Contributor

And use a model pre-trained on Kinetics-400 (K400).
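
A minimal sketch of how that could look in the config, assuming an MMAction2-style config where `load_from` takes a checkpoint path or URL; the URL below is a placeholder, the real link comes from the MMAction2 model zoo:

```python
# tsm_r50_1x1x8_50e_customdataset_fight_rgb320.py (hypothetical snippet)
# Initialise the recognizer from a Kinetics-400 checkpoint instead of
# starting from ImageNet weights only.
# NOTE: placeholder URL -- replace with the actual TSM K400 checkpoint
# listed in the MMAction2 model zoo.
load_from = 'https://download.openmmlab.com/mmaction/recognition/tsm/<k400_checkpoint>.pth'
```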

@ZJU-lishuang
Author

Oh, I forgot to use the model pre-trained on K400.
I will give it a try.

@ZJU-lishuang
Author

@innerlee When I do as you said, the top-1 acc of the last model is higher than before.
But the top-1 acc gets smaller and smaller as training continues.
I3D log with K400 pre-training:
20210223_092126.log

TSM log with K400 pre-training:
20210223_081901.log

Is the dataset the reason for the low top-1 acc?

@innerlee
Contributor

The dataset is very tiny.

@innerlee
Contributor

  • Fix (freeze) the first two stages of the backbone; see the sketch below.
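
A sketch of one way to do this, assuming the `ResNetTSM` backbone accepts a `frozen_stages` argument like the other mm-series ResNet backbones (with `frozen_stages=2` freezing the stem plus the first two residual stages):

```python
# Hypothetical override in the model section of the config:
model = dict(
    backbone=dict(
        type='ResNetTSM',
        depth=50,
        # Keep the generic low-level features from the pre-trained model fixed
        # and only fine-tune the later stages on the small fight dataset.
        frozen_stages=2))
```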

@irvingzhang0512
Contributor

It's overfitting. For small datasets, fewer epochs (e.g. 20) are enough.
You can also set interval=1 for both evaluation and checkpoint_config in your configs, as sketched below.
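
A sketch of those adjustments, assuming an MMAction2 0.x-style config where the training schedule and hooks are controlled by these top-level keys:

```python
# Hypothetical schedule/hook overrides in the config:
total_epochs = 20                     # stop well before 50 epochs on a small dataset
evaluation = dict(
    interval=1,                       # validate every epoch so the accuracy peak is caught
    metrics=['top_k_accuracy', 'mean_class_accuracy'])
checkpoint_config = dict(interval=1)  # save a checkpoint every epoch to pick the best one
```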

@open-mmlab locked and limited conversation to collaborators on Feb 23, 2021

This issue was moved to a discussion.

You can continue the conversation there.
