joiy123/v8-grape

The data presented in this study are openly available in the digital repository Zenodo:

① Grapevine Bunch Detection Dataset: https://doi.org/10.5281/zenodo.7717055
② Grapevine Bunch Condition Detection Dataset: https://doi.org/10.5281/zenodo.7717014

References

[1] Ultralytics. (2023). Available at: https://github.com/ultralytics/ultralytics
[2] Guo, C., Dai, J., Szemenyei, M., and Yi, Y. (2023). Channel Attention Separable Convolution Network for Skin Lesion Segmentation. arXiv preprint arXiv:2309.01072. doi: 10.48550/arXiv.2309.01072.
[3] Li, H., Li, J., Wei, H., Liu, Z., Zhan, Z., and Ren, Q. (2022). Slim-neck by GSConv: A better design paradigm of detector architectures for autonomous vehicles. arXiv preprint arXiv:2206.02424. doi: 10.48550/arXiv.2206.02424.
[4] Liu, W., Lu, H., Fu, H., and Cao, Z. (2023). Learning to Upsample by Learning to Sample. arXiv preprint arXiv:2308.15085. doi: 10.48550/arXiv.2308.15085.
[5] Siliang, M., and Yong, X. (2023). MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression. arXiv preprint arXiv:2307.07662. doi: 10.48550/arXiv.2307.07662.
[6] Hu, J., Shen, L., Albanie, S., Sun, G., and Wu, E. (2018). Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 42(8), 2011-2023. doi: 10.1109/TPAMI.2019.2913372.
[7] Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., and Hu, Q. (2020). ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 11531-11539. doi: 10.1109/CVPR42600.2020.01155.
[8] Liu, Y., Shao, Z., Teng, Y., and Hoffmann, N. (2021). NAM: Normalization-based Attention Module. arXiv preprint arXiv:2111.12419. doi: 10.48550/arXiv.2111.12419.
[9] Ouyang, D., He, S., Zhang, G., Luo, M., Guo, H., Zhan, J., et al. (2023). Efficient Multi-Scale Attention Module with Cross-Spatial Learning. arXiv preprint arXiv:2305.13563. doi: 10.48550/arXiv.2305.13563.
[10] Wang, J., Chen, K., Xu, R., Liu, Z., Loy, C. C., and Lin, D. (2019). CARAFE: Content-Aware ReAssembly of FEatures. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), pp. 3007-3016. doi: 10.1109/ICCV.2019.00310.
[11] Tong, Z., Chen, Y., Xu, Z., and Yu, R. (2023). Wise-IoU: Bounding Box Regression Loss with Dynamic Focusing Mechanism. arXiv preprint arXiv:2301.10051. doi: 10.48550/arXiv.2301.10051.
[12] Zhang, Y.-F., Ren, W., Zhang, Z., Jia, Z., Wang, L., and Tan, T. (2022). Focal and Efficient IOU Loss for Accurate Bounding Box Regression. Neurocomputing 506, 146–157. doi: 10.1016/j.neucom.2022.07.042.
[13] Ultralytics. (2020). Available at: https://github.com/ultralytics/yolov5
[14] Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., et al. (2022). YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv preprint arXiv:2209.02976. doi: 10.48550/arXiv.2209.02976.
[15] Wang, C., He, W., Nie, Y., Guo, J., Liu, C., Han, K., et al. (2023). Gold-YOLO: Efficient Object Detector via Gather-and-Distribute Mechanism. arXiv preprint arXiv:2309.11331. doi: 10.48550/arXiv.2309.11331.
[16] Wang, C.-Y., Bochkovskiy, A., and Liao, H.-Y. M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696. doi: 10.48550/arXiv.2207.02696.
[17] Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J. (2021). YOLOX: Exceeding YOLO Series in 2021. arXiv preprint arXiv:2107.08430. doi: 10.48550/arXiv.2107.08430.
[18] Xu, S., Wang, X., Lv, W., Chang, Q., Cui, C., Deng, K., et al. (2022). PP-YOLOE: An evolved version of YOLO. arXiv preprint arXiv:2203.16250. doi: 10.48550/arXiv.2203.16250.
[19] Xu, X., Jiang, Y., Chen, W., Huang, Y., Zhang, Y., and Sun, X. (2022). DAMO-YOLO: A Report on Real-Time Object Detection Design. arXiv preprint arXiv:2211.15444. doi: 10.48550/arXiv.2211.15444.
