Lightweight sandy vegetation object detection algorithm based on attention mechanism

Published: 21 November 2022

Authors

Hua, Z. and Guan, M.

Abstract

This paper proposes a lightweight sandy vegetation object detection algorithm based on an attention mechanism to address object detection in harsh sandy environments. The number of model parameters is reduced through a lightweight design of an anchor-free object detection model, thereby lowering inference time and memory cost. Specifically, the algorithm uses a lightweight backbone network to extract features and linear interpolation in the neck network to achieve multi-scale feature fusion, while the head network is compressed with depthwise separable convolutions. A channel attention mechanism is also added to further optimise the model. Experiments demonstrate the effectiveness of the algorithm: it achieves an mAP of 76% with a prediction time of 0.0277 s per frame, providing both efficient and accurate operation in the desert environment.
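
The building blocks named in the abstract lend themselves to a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation: a MobileNet-style depthwise separable convolution (as used to compress the head network), a squeeze-and-excitation style channel attention block, and bilinear (linear) interpolation for multi-scale feature fusion in the neck. All class names, channel widths, and feature-map sizes are illustrative assumptions.

```python
# Minimal sketch of the components described in the abstract (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling,
    a small bottleneck MLP, and a sigmoid gate that re-weights channels."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(F.adaptive_avg_pool2d(x, 1).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise re-weighting


def fuse_with_linear_interpolation(deep_feat, shallow_feat):
    """Upsample the deeper feature map with bilinear interpolation and add it to
    the shallower one, as a stand-in for the multi-scale fusion in the neck."""
    up = F.interpolate(deep_feat, size=shallow_feat.shape[-2:], mode="bilinear", align_corners=False)
    return shallow_feat + up


if __name__ == "__main__":
    # Hypothetical channel width and feature-map sizes, chosen for illustration.
    head = nn.Sequential(DepthwiseSeparableConv(96, 96), ChannelAttention(96))
    p3, p4 = torch.randn(1, 96, 40, 40), torch.randn(1, 96, 20, 20)
    fused = fuse_with_linear_interpolation(p4, p3)
    print(head(fused).shape)  # torch.Size([1, 96, 40, 40])
```

Replacing a standard k x k convolution with a depthwise separable one reduces the per-layer parameter count from roughly k^2 * C_in * C_out to k^2 * C_in + C_in * C_out, which is the main source of the head-network compression described in the abstract.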

How to Cite

Hua, Z. and Guan, M. (2022) “Lightweight sandy vegetation object detection algorithm based on attention mechanism”, Journal of Agricultural Engineering, 54(1). doi: 10.4081/jae.2022.1471.
