How is the dataset divided? #1
Comments
Thanks for your interest! I don't think we have received a dataset request from you. May I know which data files you used? Also, could you please describe your experiment steps clearly?
I'm sorry that the previous description was not clear. I just cloned the code and trained with the default parameters and DJI_0007~DJI_0014, but the results I obtained differ from those in the paper. May I ask which part of the data was used for training to get the results in the paper? The dataset was applied for by a classmate in the same lab; I'm doing similar research with him and therefore using the same dataset.
The default code only uses DJI_0006, for fast execution. You definitely need more training data. In the paper, we use DJI_0007~0022, which gives roughly 57500 total data points. Of those, about 51750 were used for training, and the rest for test + validation. So you might want to change the code to be: In terms of the dataset, please limit its circulation to inside your lab. Please have others submit their own request if they are interested. Thanks!
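The split described above can be sketched as follows. This is a hypothetical illustration, not the repo's actual code: the file-list construction and the exact train/test/validation boundaries are assumptions based on the numbers quoted in this thread.

```python
# Hypothetical sketch of the dataset split described above.
# Variable names and the exact per-set counts are assumptions.

# Videos used in the paper: DJI_0007 through DJI_0022 (16 files)
videos = [f"DJI_{i:04d}" for i in range(7, 23)]

total_points = 57500   # approximate total data points across the 16 videos
train_points = 51750   # used for training (exactly 90% of the total)
rest_points = total_points - train_points  # shared by test + validation

print(len(videos))                  # 16 videos
print(videos[0], videos[-1])        # DJI_0007 DJI_0022
print(train_points / total_points)  # 0.9
```

Note that 51750 / 57500 is exactly a 90/10 split between training and the combined test + validation sets.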
Thank you very much for taking the time to help despite your busy schedule! Your detailed explanation has given me a full understanding of your experimental process.