Problem downloading the DADA-2000 dataset #1
Comments
Hi @JWFangit, any news on this issue? |
Hello, we are glad to have your attention. The ground truth of our dataset has been changed. We are working on it; it will be re-uploaded in the near future and will also be uploaded to Google Drive. Please be patient. Thank you. |
First of all, thank you very much for this good dataset for detecting traffic anomalies together with anomaly classification. |
@San-Di Is there any link available for the raw videos? |
@San-Di Can you provide the Google Drive link? |
@KC900201 I was just asking the author to provide the raw videos or any link other than the Baidu platform. I also don't have the raw videos. ^^ |
@JWFangit Any updates on this issue? |
Hi! I have processed the videos and maps according to the author's script, and the processing went smoothly. However, when I run main.py, I find that it also needs a folder called "semantic". Where can the images for this folder be downloaded, or what script is needed to generate them? |
Hi. The semantic images in our work are obtained using the DeepLab-v3 model. You can use it to generate a semantic image for each RGB frame. |
Thank you for your reply! DeepLabV3 has several implementations in different frameworks. Could you please provide the GitHub repository you used for processing the data? And is the model trained on the 80-category COCO dataset? |
Hi. The DeepLab-v3 model is pre-trained on the Cityscapes dataset. |
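For reference, below is a minimal sketch of how one could generate a "semantic" folder of per-frame masks with an off-the-shelf DeepLabV3. It is an assumption-laden illustration, not the authors' pipeline: it uses torchvision's model (whose default weights are COCO/VOC-trained, not the Cityscapes-trained checkpoint mentioned above), and the frames/ and semantic/ folder names are hypothetical.

```python
# Hypothetical sketch only -- not the authors' actual pipeline. Assumes
# torchvision's DeepLabV3 (bundled COCO/VOC weights, not Cityscapes) and
# made-up folder names for the extracted frames and output masks.
import os

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet101

device = "cuda" if torch.cuda.is_available() else "cpu"
# weights="DEFAULT" needs torchvision >= 0.13; older versions use pretrained=True
model = deeplabv3_resnet101(weights="DEFAULT").to(device).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frames_dir = "frames"        # hypothetical folder of extracted RGB frames
semantic_dir = "semantic"    # folder that main.py reportedly expects
os.makedirs(semantic_dir, exist_ok=True)

for name in sorted(os.listdir(frames_dir)):
    img = Image.open(os.path.join(frames_dir, name)).convert("RGB")
    x = preprocess(img).unsqueeze(0).to(device)
    with torch.no_grad():
        logits = model(x)["out"]                     # (1, num_classes, H, W)
    mask = logits.argmax(1).squeeze(0).byte().cpu()  # per-pixel class IDs
    Image.fromarray(mask.numpy()).save(
        os.path.join(semantic_dir, os.path.splitext(name)[0] + ".png"))
```

To match the Cityscapes label set the authors describe, a Cityscapes-trained DeepLabv3 checkpoint from a dedicated repository would have to be substituted for the default weights. |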
I have found a highly rated DeepLabv3 project on GitHub and used its model pre-trained on the Cityscapes dataset. However, when generating semantic masks for this project, I found that the results were poor. Could you please provide a download link for the missing "semantic" folder, or the GitHub address of the DeepLabv3 project and the pre-trained model you used to generate the semantic masks? Thanks a lot! |
Hi Xintong,
We also find that the semantic images produced by DeepLab-v3 are not good and have many unclear semantic boundaries. Therefore, the performance gain after fusing the semantic images is not very large. Certainly, semantic images are useful for capturing the key information in driver attention prediction. I think you can just make an attempt and check their role in your model.
Wish you a good result.
Regards,
Jianwu
|
Any updates on providing the dataset on Google Drive? |
Hi, I wish to download the public traffic dataset from this GitHub repository. However, the dataset is stored on the Baidu repository shown below, and Baidu currently doesn't allow member registration outside of the PR China. Is there any other way for me to download the dataset from other sources?
https://pan.baidu.com/s/1gt0zzd-ofeVeElSlTQbVmw#list/path=%2FDADA-2000%2FHalf%20of%20the%20data&parentPath=%2F