Review

Deep Learning-Based Weed–Crop Recognition for Smart Agricultural Equipment: A Review

by
Hao-Ran Qu
and
Wen-Hao Su
*
College of Engineering, China Agricultural University, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Agronomy 2024, 14(2), 363; https://doi.org/10.3390/agronomy14020363
Submission received: 27 December 2023 / Revised: 24 January 2024 / Accepted: 7 February 2024 / Published: 11 February 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Weeds and crops engage in a relentless battle for the same resources, leading to potential reductions in crop yields and increased agricultural costs. Traditional methods of weed control, such as heavy herbicide use, come with the drawback of promoting weed resistance and environmental pollution. As the demand for pollution-free and organic agricultural products rises, there is a pressing need for innovative solutions. The emergence of smart agricultural equipment, including intelligent robots, unmanned aerial vehicles and satellite technology, proves to be pivotal in addressing weed-related challenges. The effectiveness of smart agricultural equipment, however, hinges on accurate detection, a task influenced by various factors, like growth stages, environmental conditions and shading. To achieve precise crop identification, it is essential to employ suitable sensors and optimized algorithms. Deep learning plays a crucial role in enhancing weed recognition accuracy. This advancement enables targeted actions such as minimal pesticide spraying or precise laser excision of weeds, effectively reducing the overall cost of agricultural production. This paper provides a thorough overview of the application of deep learning for crop and weed recognition in smart agricultural equipment. Starting with an overview of intelligent agricultural tools, sensors and identification algorithms, the discussion delves into instructive examples, showcasing the technology’s prowess in distinguishing between weeds and crops. The narrative highlights recent breakthroughs in automated technologies for precision plant identification while acknowledging existing challenges and proposing prospects. By marrying cutting-edge technology with sustainable agricultural practices, the adoption of intelligent equipment presents a promising path toward efficient and eco-friendly weed management in modern agriculture.

1. Introduction

Weeds are a major threat in agriculture: they occur throughout fields and compete with crop plants for resources, reducing crop yields. Yield losses depend on factors such as weed species, population density, relative time of emergence and distribution, as well as on soil type, soil moisture level, pH and fertility [1,2]. For decades, researchers and farmers have struggled to control weeds and overcome the thorny challenges they pose. Weeds compete with crops for water, nutrients and sunlight, and if not controlled properly, they can adversely affect crop yield and quality. Research has shown a significant link between weed competition and reduced crop yields [2]. For example, the annual cost of weeds in Australian grain production systems is USD 3.3 billion, comprising USD 2.6 billion for weed control and USD 0.7 billion in lost yield [3].
In today’s agricultural sector, accurately identifying crops and weeds is crucial for improving agricultural productivity, reducing production costs and achieving sustainable agricultural development. The rapid development of deep learning techniques and their wide application in computer vision provide new opportunities for crop and weed recognition. The high automation and learning capabilities of deep learning models enable them to learn from large datasets and gradually improve their performance, bringing unprecedented breakthroughs to precision agriculture. At present, the main methods of weed control in agricultural fields are hand weeding, mechanical weeding, laser weeding and chemical weeding. Chemical weeding is low-cost, unaffected by terrain and widely used around the world [4]. However, the heavy use of herbicides increases weed resistance and raises the cost of agricultural inputs, and reducing herbicide use is a critical step towards sustainable agriculture. Site-specific weed control can save up to 90% of herbicide expenditure; given that annual worldwide pesticide sales amount to about USD 100 billion, realizing this idea would significantly reduce agricultural expenditure [5]. Spraying pesticides over large areas can also pollute the environment. For example, indiscriminate broadcast spraying throughout tobacco fields, especially during the early growth phase, can lead to unnecessary off-target spraying of bare soil between contiguous tobacco plants, causing environmental pollution and pesticide seepage into the ground [6,7]. Pesticide use also affects human health: the WHO has estimated that 1 million adverse reactions have been reported where insecticides are hand-sprayed in crop fields [8]. Over-reliance on herbicides and the spread of herbicide-resistant weeds have made the EU’s agricultural system more fragile and unsustainable; to better control herbicide use, the EU Green Deal sets the goal of cutting the use and risks of chemical pesticides by 50 percent by 2030 [9]. The European Food Safety Authority (EFSA) has reported that 98.9% of food products contain agrochemical residues (of which 1.5% exceed legal limits). In addition, plant resistance to agrochemicals (e.g., herbicides) is becoming a huge threat to crop yields in many countries [10].
Manual weeding is labor-intensive, and weeds cannot easily be detected in a timely manner; the only remedy is to increase manpower, which inevitably raises agricultural costs. Mechanical weed control is especially suitable for organic farmland and can also be useful in conventional farmland. On the other hand, machines can damage crops and erode the soil and surrounding environment [11]. Weed removal within crop rows still relies on manual work in many cases, but manual weeding is inefficient. With the development of deep learning algorithms, weed management has achieved successful results. Agricultural robotics research has increased over the past few years due to the potential applications of robots and industry efforts in robot development. The role of robots in many agricultural tasks has been studied, focusing mainly on improving the automation of traditional agricultural machinery and weeding processes [12,13]. Such robots can accurately recognize weeds and treat them precisely, which greatly reduces herbicide use, avoids environmental pollution and lowers agricultural costs. In smart agriculture, using sensors installed on satellites, unmanned aerial vehicles or ground tractors to distinguish between weeds and crops is becoming an effective method of weed management. Remote sensing technology allows the distribution of weeds and crops to be charted quickly over large areas [14]. An SVM-based crop/weed detection system for tractor boom sprayers was constructed to spot-spray tobacco crops in the field, achieving a classification accuracy of 96% [6]. In the last decade or so, Earth observation satellites have provided higher-resolution free remote sensing data, making high-resolution satellite monitoring of agriculture possible. Google Street View images were tested using a convolutional neural network (CNN), with an overall accuracy of 83.3% [15]. Laser weed control also offers a new possibility for weed removal: a weeding robot based on the YOLOX convolutional neural network uses a blue laser to remove weeds, with a weed recognition rate of 88.94% [16]. Drones are considered to be more efficient than robotic or satellite acquisition because they can rapidly collect field data at very high spatial resolution and at low cost [17,18,19]. In one widely cited application, drones were used to capture RGB images, and SVM, KNN, AdaBoost and CNN classifiers were evaluated on a test set, achieving accuracies of 89.75%, 85.58%, 90.25% and 92.41%, respectively, for recognizing rice weeds [20].
This paper reviews the current state of research on applying deep learning to crop and weed recognition for smart agricultural equipment. There are many previous review articles related to this topic. For example, Imran Zualkernan et al. [21] focused on new deep learning models and architectures for research using drone image data since 2018. Jiayou Shi et al. [22] presented a thorough review of the methods and applications related to crop row detection in agricultural machinery navigation, paying special attention to the sensors and systems used for crop row detection in order to validate and improve their sensing and detection capabilities. Ana I. de Castro et al. [23] reviewed the sensor types, configurations and image processing algorithms of UAVs for agriculture and forestry applications. Wen-Hao Su [14] discussed RGB, hyperspectral and point spectroscopy sensors for crop and weed identification. However, these reviews did not provide a comprehensive introduction to intelligent agricultural equipment. We briefly describe the need for intelligent weed management and then present aspects of weed control. Section 2 focuses on the image recognition steps for smart devices, including image collection, image preprocessing and feature extraction. Section 3 describes the application of deep learning models for recognizing weeds in smart agricultural equipment. These applications mainly utilize convolutional neural networks (CNNs) and their variants, such as Faster R-CNN [24], MTS-CNN [25], FHGSO-based deep CNN [26] and DRCNN [27]. In addition, the support vector machine (SVM) is also heavily used, mostly on agricultural equipment such as tractors and drones [4,6,24]. Most notably, the Transformer neural network and its variants, for example, ViT [28], Swin-DeepLab [29] and Deformable DETR [30], are also used. The ViT model is relatively recently proposed and outperforms some advanced models such as EfficientNet and ResNet, so it has great potential [28]. With the rapid transformation of agricultural landscapes driven by technological innovations, this review aims to synthesize the current state of the art in the application of deep learning-based smart agricultural equipment for weed and crop differentiation. By elucidating state-of-the-art technologies, identifying research gaps and suggesting potential directions for future research, this study aims to contribute to the development of intelligent and autonomous systems that empower farmers with the tools to address weed management challenges, leading to sustainable and efficient agricultural management.

2. Weed Detection Using Remote Sensing Technique

The workflow of image recognition of crops and weeds can generally be divided into four steps: image data acquisition, preprocessing, feature extraction and classification of weeds and crops [31]. The specific details are shown in Figure 1.

2.1. Image Data Collection

DL-based weed inspection and classification techniques require a sufficient quantity of labeled data. Data can be gathered using various types of sensors mounted on various smart agricultural devices. The main sensors commonly used are as follows: RGB sensors, multispectral sensors, hyperspectral sensors and LiDAR sensors. Table 1 shows the images collected by different sensors.
Visible light (RGB) sensors are the most commonly used sensors on UAVs in precision agriculture and related smart agriculture applications. RGB or color imaging has gained popularity due to its clear color-revealing principle, simple hardware structure and proven production process. RGB cameras are comparatively inexpensive and lightweight, and they perform well in producing orthophoto maps from images and aerial videos that capture the entire field in a single pass. UAVs equipped with RGB cameras have the benefits of small size, low cost, high productivity and mobility [32]. Despite these benefits, however, RGB imaging provides only limited data at a limited number of wavelengths [33].
Recently, technologies such as hyperspectral imaging (HSI) systems have provided a chance to quickly categorize plant species, both in the laboratory and in the field. The advantage of HSI is that it provides an integrated analysis of imaging and spectroscopy, relating various chemical components to their absorption features in the spectrum. The principle of HSI spectroscopy is based on molecular vibrations in the infrared region. Therefore, absorbance at specific wavelengths, which can be related to specific chemical bonds, can be used for material classification and quality determination. Weed identification techniques based on RGB imaging rely on shape, size and color discrimination, whereas the use of HSI increases the value of such techniques [34]. However, hyperspectral images typically contain a great deal of superfluous information, which may mask the real information of ground objects and adversely affect spectral data recognition. In addition, high-dimensional spectral data not only increase temporal and spatial complexity but also tend to cause a dimensional disaster. To address these problems, Zhihua Diao et al. [35] proposed a lightweight three-dimensional convolutional neural network model. An image enhancement method was used to improve the training results and address the problem of sparse training samples in hyperspectral images, and a lightweight unit module was introduced on this basis to reduce the number of parameters in the network. Meanwhile, Zhaoxia Lou et al. [37] proposed a 3D-CNN model for predicting the CCI competition index. There are two key aspects of hyperspectral band selection: the effective preservation of information and the elimination of redundancy. Many hyperspectral studies use VIP for band selection because it performs better in terms of information preservation. However, this method has the limitation of retaining an excessive number of bands, which may identify irrelevant bands as significant. Therefore, the use of the VIP method for band selection may require further research [36,37].
Compared to hyperspectral cameras, multispectral cameras are lightweight and low-cost and have high spatial resolution, making them suitable for large areas [38]. In contrast to RGB cameras, multispectral cameras have additional spectral bands and are capable of sensing radiation in both the invisible (red-edge and near-infrared) and visible segments of the spectrum, typically spanning four to six bands. The inclusion of a reflectance calibration panel makes multispectral cameras less susceptible to environmental variation [39,40]. A multispectral image is essentially a collection of grayscale images, with each image corresponding to a specific wavelength or band of wavelengths in the electromagnetic spectrum. Multispectral imaging (MSI) involves capturing images from various spectral bands to gather both spatial and spectral information. MSI technology enables the creation of wavelength channels in the near-UV, visible, near-IR, mid-IR and far-IR bands [33]. One of the most commonly used techniques for composing multispectral images is the co-registration of the bands of interest. Images captured by multispectral cameras show significant band misregistration effects due to lens distortion and the varying viewing angles of each lens or sensor [41]. To obtain accurate spectral and geometric information, a precise geometric distortion correction and band-to-band co-registration method is necessary [42]. Multispectral imaging, with the advantages of lighter hardware and faster calculation, is emerging as a practical successor to hyperspectral technology.
Thermal infrared sensors capture the temperature of objects and generate images from the collected information. Thermal cameras use infrared sensors and optical lenses to capture thermal energy [43]. The development of higher-resolution thermal imaging systems compatible with unmanned aerial vehicles (UAVs) has facilitated the practical application of thermal imaging in agriculture. The use of thermal measurement in conjunction with other sensor measurements, such as hyperspectral, visible and optical distance, has proven to be more effective in field-scale crop phenotyping [44]. When combined with deep learning, remote thermal sensing technology can recognize crops and weeds and assess crop stress [45].
LiDAR (Light Detection and Ranging) is a highly advanced and dependable sensing technology that has been widely used in crop row detection and robotic navigation. It is renowned for its high precision, wide range and strong immunity to interference [46]. LiDAR works by having the transmitting system emit visible or near-infrared light waves, which are reflected off the target and detected by the receiving system; the data obtained are then processed to generate parametric information, including distance. LiDAR sensors have been utilized in crop row detection to provide highly accurate and detailed 3D maps of crop canopies [47]. Additionally, LiDAR sensors can penetrate vegetation and capture ground surface data, facilitating the detection of crop rows even in densely vegetated fields [22]. LiDAR can therefore be used in intensive agricultural scenarios.
Table 1. Examples of public datasets.

| Dataset | Crop and Weed | Sensor | Number of Images | Reference |
| --- | --- | --- | --- | --- |
| Dataset of annotated food crops and weed images | 6 crops, 8 weeds | RGB | 1176 | Sudars, K., et al. [48] |
| DeepWeeds | 8 weeds | RGB and ground-based weed control robot (AutoWeed) | 17,509 | Olsen, A., et al. [49] |
| Weed-Corn/Lettuce/Radish | maize, lettuce, radish | RGB | 7200 | Jiang, H.H., et al. [50] |
| Sugar Beet/Weed Dataset | sugar beet | Multispectral and Micro Aerial Vehicle (MAV) | 465 | Sa, I., et al. [51] |
| Rumex and Urtica weed plants Dataset | Rumex and Urtica weed plants | RGB and crawler robots | 10,000 | Binch, A. and C.W. Fox [52] |
| Multispectral Lettuce Dataset | Lettuce and weeds | Multispectral bands and UAV | 100 | Osorio, K., et al. [53] |
| Early crop weed | tomato, cotton, velvetleaf and black nightshade | RGB | 508 | Espejo-Garcia, B., et al. [54] |
| AIWeeds | flax and 14 most common weeds | RGB | 10,000 | Du, Y., et al. [5] |
| TobSet | Tobacco crop and weeds | RGB | 1700 | Alam, M.S., et al. [55] |
| Crop and weed | Maize, the common bean and a variety of weeds | RGB and autonomous electric robot | 83 | Champ, J., et al. [56] |
| Datasets for sugar beet crop/weed detection | Capsella bursa-pastoris | RGB-NIR and BOSCH BoniRob farm robot | 8518 | Di Cicco, M., et al. [57] |

RGB—red, green, blue; NIR—near infrared.

2.2. Preprocessing

After acquiring data from various sources, it is essential to prepare the data for the training, testing and validation of models. Raw data may not always be suitable for deep learning (DL) models. Approaches for dataset preparation include the application of various image processing techniques, data labeling, utilization of image enhancement methods to augment the input data and introduce variations, as well as the generation of synthetic data for training. The commonly used image processing techniques are removal of background, resizing of captured images, green component segmentation, removal of motion blur, denoising, image enhancement, extraction of color vegetation indices and alteration in color models [58]. Table 2 demonstrates the effect of different image enhancement techniques on segmentation.

2.2.1. Image Resizing

Achieving good accuracy with smaller patch sizes requires less training time for the model. To expedite processing and reduce computational complexity, many studies performed image resizing operations on the dataset before feeding it into the deep learning (DL) model. Following the collection of field images, their resolution was adjusted to meet the DL network’s requirements [58]. Julien Champ et al. [56] resized the images so that their shorter edge was 1200 pixels and their longer edge 2048 pixels, which allowed the model to be run in a reasonable time on a standard graphics processing unit. Reenul Reedha et al. [28] extracted crop and weed image patches from the bounding boxes and resized them to 64 × 64 pixels. This choice of image size aligned with the dimensions of the bounding boxes, possibly corresponding to the altitude at which the UAV was flown and the size of the crops in the study field. High-resolution images are sometimes split into a number of patches to reduce computational complexity. Ramirez et al. [60] captured only five high-resolution images using a drone, which were then segmented into non-overlapping chunks and chunks with overlap. Such adjustments to image size reduce the computational complexity and decrease the computational time of the DL model while achieving optimal results.
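As an illustration of these resizing and patching steps, the following is a minimal sketch, assuming Python with Pillow and NumPy; the file name, the 1200-pixel shorter edge and the 64-pixel patch size are illustrative values borrowed from the studies above, not a prescribed pipeline.

```python
# Hedged sketch: resize a field image by its shorter edge, then split it into
# fixed-size patches before feeding a DL model. All names and sizes are examples.
import numpy as np
from PIL import Image

def resize_shorter_edge(img: Image.Image, shorter: int = 1200) -> Image.Image:
    """Resize so the shorter edge equals `shorter`, preserving aspect ratio."""
    w, h = img.size
    scale = shorter / min(w, h)
    return img.resize((round(w * scale), round(h * scale)))

def split_into_patches(arr: np.ndarray, patch: int = 64, overlap: int = 0):
    """Yield patches from an H x W x C array, optionally with overlap."""
    step = patch - overlap
    h, w = arr.shape[:2]
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            yield arr[y:y + patch, x:x + patch]

img = resize_shorter_edge(Image.open("field_image.jpg"))   # hypothetical file
patches = list(split_into_patches(np.asarray(img), patch=64))
```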

2.2.2. Image Enhancement and Denoising

Strategies such as image enhancement and denoising can effectively improve recognition accuracy. Reenul Reedha et al. [28] utilized data augmentation strategies to enrich the dataset, including random resized cropping, color jittering and RandAugment. This was implemented with the Keras ImageDataGenerator, which generates augmented images on the fly. As a result, the basic ViT B-16 model reached a recognition accuracy of 99.4%. The use of data augmentation aimed to enhance the model’s robustness and generalization capabilities. Aichen Wang et al. [59] assessed the performance of the DL model based on the input representation of images, applying several image preprocessing operations, such as histogram equalization, automatic contrast adjustment and deep photo enhancement. Babu et al. [27] performed image enhancement through CLAHE, which allowed for better visual interpretation of images. CLAHE provides superior contrast limiting compared to ordinary adaptive histogram equalization, in which the noise in near-constant regions of images is magnified; the CLAHE algorithm improves image contrast while limiting this amplification, improving image quality. CLAHE is widely used for enhancing medical imagery, satellite images and similar data. Dmitrii Vypirailenko et al. [61] utilized two methods of data augmentation. The first was to resize the image to 128 × 128 pixels and then augment the data by horizontal and vertical flipping, shifting and rotating; they also used random contrast correction as an augmentation method to ensure the effectiveness of the augmented image. The other approach was to use random affine transformations. Each augmentation was applied to an image as it was passed to the model, and weights in the cross-entropy loss were used to overcome the imbalance in the dataset. The result of augmentation should resemble images taken at the real site. In conclusion, effective enhancement and denoising of images have a significant impact on recognition performance.
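As a concrete example of CLAHE, here is a minimal sketch, assuming Python with OpenCV, that applies contrast-limited adaptive histogram equalization to the lightness channel of an RGB field image; the file name and the clipLimit/tileGridSize values are assumptions for illustration, not the settings of the cited studies.

```python
# Hedged sketch: CLAHE contrast enhancement on the L channel of a LAB image.
import cv2

bgr = cv2.imread("field_image.jpg")                     # hypothetical input file
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)              # enhance lightness only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("field_image_clahe.jpg", enhanced)
```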

2.2.3. Background Removal

Background removal plays an important role in weed identification. The aim of segmentation is to extract the plant region of interest (ROI) by separating the background (e.g., soil and stones) from the vegetation (e.g., leaves of crops and weeds). Zhaoxia Lou et al. [37] extracted the vegetation canopy spectra from the acquired images. The contrast between the vegetation canopy and the soil background was improved using OSAVI, and mask images of the soil background and vegetation canopy were generated through threshold segmentation, effectively eliminating the soil portion from the digital orthophoto map (DOM) image and retaining only the vegetation canopy area. Li et al. [34] developed a threshold segmentation algorithm involving spectral data extraction with a threshold of 0.19 at a 950 nm wavelength. The mask generated in this way was multiplied by the original HSI image, and the resulting image contained only plants. At the same time, they used a simple linear iterative clustering algorithm to segment the plant images into superpixels, taking similarity in the spectral and spatial domains into account when grouping pixels into clusters. The results show that separating a crop from the background can be achieved by spectral characterization and threshold adjustment, and that an MLP developed using Sp data is a more robust and reliable method than traditional classification methods. Similarly, a threshold adjustment in color can be used to separate the crop from the background. Borja Espejo-Garcia et al. [54] first normalized the R, G and B channels in the image with respect to the green channel, used the ExG (excess green) index for the initial vegetation segmentation, and then applied OTSU thresholding to the grayscale image to obtain a binary mask. Based on this threshold segmentation method, many algorithms have also obtained more than 95% accuracy in weed identification. Gee, C. et al. [62] proposed a new vegetation index called MetaIndex, which combines the advantages of six vegetation indices; the method refines the results by geodesic segmentation and produces a black-and-white vegetation mask.
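To make the ExG-plus-Otsu recipe concrete, the following is a minimal sketch, assuming Python with OpenCV and NumPy; it follows the general normalization, ExG and thresholding idea described above rather than the exact pipeline of any cited study, and the file name is a placeholder.

```python
# Hedged sketch: vegetation/background segmentation via the excess-green index
# (ExG = 2g - r - b on normalized channels) followed by Otsu thresholding.
import cv2
import numpy as np

bgr = cv2.imread("field_image.jpg").astype(np.float32)   # hypothetical input file
b, g, r = cv2.split(bgr)
total = b + g + r + 1e-6                                 # avoid division by zero
exg = 2.0 * (g / total) - (r / total) - (b / total)      # excess green index

exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
vegetation = cv2.bitwise_and(bgr.astype(np.uint8), bgr.astype(np.uint8), mask=mask)
```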

2.3. Feature Extraction of Weeds

In agriculture, there are four groups of descriptive features: visual textures, spatial contexts, spectral features and biological morphological features [63].

2.3.1. Visual Texture Feature

Humans can judge textural features through their senses, for example, whether a surface is soft or hard, rough or fine, or horizontally or vertically corrugated [64]. Research on texture-aware properties has its origins in computer vision as well as cognitive science. In computer vision-based approaches, visual textures have played a key role in image understanding. Because local image descriptors are pooled in an unordered manner, the texture of an image is represented by computing the intensity of clustered pixels in space, and six common directions of variability are identified [65]. Figure 2 shows a sample image of texture-based segmentation using a Gabor filter. The Gabor filter, which is a group of Gabor wavelets, automatically determines the boundaries between tobacco and non-tobacco objects (weeds) based on their texture characteristics. The extracted Gabor texture features are input to a k-means clustering algorithm, which separates textured regions of tobacco from other texture classes (weeds), as shown in Figure 2. It is evident from Figure 2b that the tobacco plant has prominent texture features compared to the surrounding objects. Table 3 lists weed identification studies based on texture features.
The gray-level co-occurrence matrix (GLCM) defines image texture from the spatial co-occurrence of pixel intensity values. In one study, four texture features, namely contrast, correlation, energy and homogeneity, were extracted from the GLCM and classified with a Radial Basis Function (RBF) kernel in a support vector machine (SVM), achieving 73% accuracy [66]. The Gabor wavelet transform enables the analysis of image scenes in both the spatial and frequency domains; the wavelet transform of an image is a well-established multi-resolution filtering technique for extracting texture features. Each derived (preprocessed) image was filtered with a bank of Gabor wavelet filters computed with designated lower (Ul) and higher (Uh) frequencies of 0.1 and 0.5, respectively, with four levels of orientation and ten levels of scale [67].
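The following is a minimal sketch of this GLCM-plus-SVM idea, assuming Python with scikit-image and scikit-learn; the distances, angles and the random placeholder patches and labels are illustrative assumptions, not the settings of the cited study.

```python
# Hedged sketch: GLCM texture features (contrast, correlation, energy,
# homogeneity) fed to an RBF-kernel SVM. Patches/labels below are random
# placeholders standing in for real crop/weed image patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """Compute a small GLCM feature vector from an 8-bit grayscale patch."""
    glcm = graycomatrix(gray_patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # placeholder data
labels = rng.integers(0, 2, size=40)                               # 0 = crop, 1 = weed

X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
```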
Yajun Chen et al. [3] identified six texture features, comprising the histogram of oriented gradients (HOG) feature, rotation-invariant local binary pattern (LBP) feature, Hu invariant moment feature, Gabor feature, gray-level co-occurrence matrix and gray-level-gradient co-occurrence matrix. These six feature descriptors were combined to create a set of 18 feature combinations. To handle image size normalization, they proposed a strategy that kept the shape of the leaves unchanged and filled the blank area of the normalized image with 0-valued pixels. Lei Zhang et al. [68] proposed a weed recognition method for support vector machines using any combination of three sets of texture features: the histogram of oriented gradients, rotation-invariant LBP features and the gray-level co-occurrence matrix (GLCM). The application of different texture features to weed identification is enumerated in Table 3. With hybrid feature extraction, the accuracy obtained by machine learning exceeds that of single-feature extraction, and the accuracy of deep learning exceeds that of machine learning. Studies of deep learning for crop and weed recognition can therefore benefit from hybrid texture features.
Table 3. Deep learning recognition based on texture feature weed identification.

| Feature Combination | Methods | Crop | Weed | Test Accuracy | Reference |
| --- | --- | --- | --- | --- | --- |
| LBP | ANN with 15 units in ensemble | carrot | Weed | 83.5% | Lease, B.A., et al. [7] |
| GLCM | SVM | Rice | Grasses | 73% | Ashraf, T. and Y.N. Khan [66] |
| LBP | SVM | spinach | Ashwagandha of the quinoa family, prickly pear of the aster family | 83.78% | Miao, R., et al. [69] |
| LBP | k-FLBPCM | broadleaf canola | wild carrot | >96.75% | Vi Nguyen Thanh Le et al. [70] |
| Gabor | LDA | blueberry | Goldenrod, lamb’s-quarters, sheep sorrel, poplar, spreading dogbane, mouse-eared hawkweed and a few black bulrushes | 81.4% | Ayalew, G., et al. [67] |
| GLCM-M | DA-WDGN | Crop | Broadleaf weeds | 99.4% | Raja, G., et al. [71] |
| Gabor | SVM | oil palm | broad weed, narrow weed | 95.0% | Zaman, M.H.M., et al. [72] |
| Gabor | MLPNN | oil palm | broad weed, narrow weed | 94.5% | Zaman, M.H.M., et al. [72] |
| LBP+GLCM | GA-SVM | lettuce | Chenopodium serotinum, Polygonum lapathifolium | 87.55% | Zhang, L., et al. [68] |
| LBP+GLCM | SVM | lettuce | Chenopodium serotinum, Polygonum lapathifolium | 81.33% | Zhang, L., et al. [68] |
| HOG+LBP+GLCM | GA-SVM | lettuce | Chenopodium serotinum, Polygonum lapathifolium | 86.02% | Zhang, L., et al. [68] |
| HOG+GLCM | GA-SVM | lettuce | Chenopodium serotinum, Polygonum lapathifolium | 85.46% | Zhang, L., et al. [68] |
| GGCM+RotLBP | SVM | Crop | Cirsium setosum (Willd.) MB, Poa annua L., Eleusine indica (L.) Gaertn., and Chenopodium album L. | 97.50% | Chen, Y., et al. [4] |

LBP—local binary pattern; GLCM—gray-level co-occurrence matrix; HOG—histogram of oriented gradients; SVM—support vector machine; ANN—artificial neural network; LDA—linear discriminant analysis; k-FLBPCM—filtered local binary patterns with contour masks and coefficient k; GA—genetic algorithm; MLPNN—multi-layer perceptron neural network.

2.3.2. Spatial Context Feature

Plant discrimination based on morphological and spectral properties is prone to variations in plant appearance, which differ markedly within a field, across fields and over the growing season, making such detection methods less stable. In contrast, the sowing pattern of crops is relatively stable, as most crops are sown or planted in rows following a predetermined pattern, so leveraging spatial contexts or position information can enhance discrimination accuracy [73]. Because crops are usually planted regularly in the field, spatial coordinates can be used to discriminate between crops and weeds, and weeds and crops can also be identified by spatial features [64]. For cereals, the detection of inter-row weeds can be effectively achieved by identifying the centerline and edge of crop rows between adjacent crop plants. Figure 3 shows a sample image of bean and spinach based on spatial feature recognition.
The Hough transform is a widely employed method for identifying linear features in an image. It represents a straight line as a spike in parameter space, where the parameters correspond to the characteristics of the line. In addition, the Hough transform can be used to detect or analyze arbitrary (non-parametric) curves by examining the shape or location of peaks in the parameter space [74]. Teplyakov et al. [75] proposed a lightweight artificial neural network for line detection with several convolutional layers and a fast Hough transform layer that can be trained in an end-to-end manner. They used a fast Hough transform (FHT) with O(N² log N) complexity, which approximates lines with dyadic patterns and utilizes an efficient summation scheme. In complex backgrounds, the YOLOv5s model was more accurate and faster than Hough-transform-based detection. To address the large memory overhead, long processing time and low recognition accuracy of the offset Hough transform, Islam, N. et al. [76] proposed an efficient circle localization algorithm based on multi-resolution segmentation (a two-step optimized Hough transform). First, the target circle was obtained by adaptive image preprocessing to determine the location of the effective search area. Then, high-quality images were separated by shape quality inspection to serve as accurate data sources. Finally, the location accuracy was improved to the sub-pixel level using least-squares circle fitting, and the effects of burrs, misalignment, defects and contamination were also reduced. The extraction of spatial features can also be used as an auxiliary recognition criterion when a UAV is flying overhead.
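As an illustration of line detection for crop rows, here is a minimal sketch, assuming Python with OpenCV, that extracts candidate row lines from a binary vegetation mask (such as the ExG/Otsu mask from Section 2.2.3) with the probabilistic Hough transform; the file name and all thresholds are illustrative assumptions.

```python
# Hedged sketch: probabilistic Hough transform applied to a vegetation mask
# to find approximately straight crop-row segments.
import cv2
import numpy as np

mask = cv2.imread("vegetation_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=200, maxLineGap=40)

vis = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)       # candidate crop rows
cv2.imwrite("crop_rows.png", vis)
```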

2.3.3. Spectral Feature

Spectroscopy acquires spectral information over a wide spectral range, in which specific vibration frequencies matching the transition energy of a chemical bond or group can be detected. Spectroscopy is categorized in many ways; common variants include point spectroscopy, RGB and hyperspectral imaging, fluorescence spectroscopy and multispectral imaging. The theoretical basis for using spectral detection is that weed competition leads to changes in plant physiology that alter light absorption and canopy reflectance properties [37]. Figure 4 shows the regions of interest for corn seedlings and weeds in hyperspectral images and the corresponding average spectral curves. Table 4 lists weed identification studies based on spectral features.
Islam et al. [77] employed RGB images captured by cameras mounted on a drone. They extracted the reflectance of the red, green and blue bands and subsequently calculated vegetation indices, including the normalized red, normalized green and normalized blue bands; the purpose of this normalization was to reduce the effect of different lighting conditions on the color channels. In addition to RGB data, Fawakherji et al. [78] took near-infrared (NIR) information into account, generating four-channel multispectral synthetic images. They extracted the plant cover from the entire image: the plant cover was a binary image in which the plant pixels to be learned were set to 1 and all other pixels to 0. The plant cover was then mapped to a realistic multispectral image, and the resulting image was used for data augmentation. The use of an NIR channel helps to improve accuracy in tasks that require vegetation inspection. Photosynthesis in healthy green plants leads to the absorption of more solar energy in the visible spectrum, resulting in a low reflectance level in the RGB channels; the NIR spectrum behaves in the opposite way, with a high reflectance level in the NIR channel, where generally 10% or less of the radiation is absorbed [78,79]. Jinya Su et al. [38] found that the triangular greenness index (TGI), built from green and NIR bands, was the most discriminative spectral index, with a recognition accuracy of 93.0%. Utilizing thermal measurements in conjunction with other sensor data, such as hyperspectral, visible and optical distance, has demonstrated increased effectiveness in field-scale crop phenotyping [80,81,82].
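To make these band computations concrete, the following is a minimal sketch, assuming Python with NumPy and an H x W x 4 array whose channels are ordered R, G, B, NIR; the band ordering and the choice of indices are illustrative assumptions rather than the configuration of any cited sensor.

```python
# Hedged sketch: normalized band ratios and NDVI from a four-band image array.
import numpy as np

def spectral_indices(img: np.ndarray) -> dict:
    """img: H x W x 4 array with channels ordered R, G, B, NIR (an assumption)."""
    r, g, b, nir = [img[..., i].astype(np.float64) for i in range(4)]
    total = r + g + b + 1e-6                     # avoid division by zero
    return {
        "norm_r": r / total,                     # normalized red band
        "norm_g": g / total,                     # normalized green band
        "norm_b": b / total,                     # normalized blue band
        "ndvi": (nir - r) / (nir + r + 1e-6),    # normalized difference vegetation index
    }

indices = spectral_indices(np.random.rand(128, 128, 4))  # placeholder image
```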
Table 4. Deep learning recognition based on spectral feature weed identification.

| Feature Combination | Methods | Crop | Weed | Test Accuracy | Reference |
| --- | --- | --- | --- | --- | --- |
| LBP | ANN with 15 units in ensemble | carrot | Weed | 83.5% | Lease, B.A., et al. [7] |
| 430 hyperspectral bands | RF | grasslands, meadows and forests | Blackberry, various species of goldenrod, wood small-reed grass | >78.4% | Sabat-Tomala, A., et al. [83] |
| 30 MNF bands | SVM | grasslands, meadows and forests | Blackberry, various species of goldenrod, wood small-reed grass | >85.0% | Sabat-Tomala, A., et al. [83] |
| Thermal | ML | soybean | kochia, waterhemp, redroot pigweed and common ragweed | 82% | Eide, A., et al. [44] |
| RGB+NIR | cGAN | sunflower, sugar beet | Weed | 94% | Fawakherji, M., et al. [78] |
| RGB | RF and SVM | chilli | unwanted weeds and parasites within crops | 96%, 94% | Islam, N., et al. [77] |
| Terahertz spectra | Wheat-V2 | wheat | wheat husk, wheat straw, wheat leaf, wheat grain, weed and ladybugs | >96.7% | Shen, Y., et al. [84] |
| Hyperspectral | lightweight 3D-CNN | Crop seedlings | Weed | >97.4% | Diao, Z., et al. [35] |
| Multispectral | U-Net and FPN | sunflower | Chenopodium album L., Convolvulus arviensis L. and Cyperus rotundus L. | 90%, 89% | Lopez, L.O., et al. [41] |
| Multispectral | SFS-Top3 | Triticum aestivum L. | Alopecurus myosuroides | 93.8% | Su, J., et al. [38] |
| Multispectral | RF and XGB | pasture lands and forest meadows of New Zealand | Hawkweeds (Pilosella spp.) | 97%, 98% | Amarasingam, N., et al. [40] |

LBP—local binary pattern; MNF—Minimum Noise Fraction; SVM—support vector machine; ANN—artificial neural network; RF—Random Forest; ML—machine learning; cGAN—conditional generative adversarial network; CNN—convolutional neural network; U-Net—convolutional network for biomedical image segmentation; FPN—feature pyramid network; SFS—sequential feature selection; XGB—extreme gradient boosting.

2.3.4. Biological Morphological Features

Biological morphological features comprise five characteristics: the shape, structure, size, pattern and color of an organism. In agriculture, biomorphic traits can be used to identify the morphological characteristics of weeds and crops, although they are more susceptible to leaf-folding or shading problems; once trained, such methods also achieve high accuracy. Deep learning approaches based on biomorphic feature recognition are a current innovation [64]. Figure 5 illustrates a schematic of biometric feature extraction from leaves.
Color features are extracted from image pixels and have the advantage of remaining stable under rotation, scaling and translation [86]. However, weeds and crop seedlings are the same shade of green, so it is difficult to distinguish them by color alone [30]. Extracting color features typically relies on color moments, which provide unique features for distinguishing objects based on their color. Color moments are founded on the probability distribution of image intensities, characterized by statistical moments such as the mean, variance and skewness. These three central moments of the intensity distribution can easily be computed for any color space, such as RGB, HSV and L*a*b* [6]. Apart from color features, researchers have proposed other shape descriptors. Tannouche et al. [87] used a region-based adjacency descriptor to discriminate between dicot and monocot weeds; the descriptor counted two kinds of adjacencies between a given pixel and its neighbors, the number of horizontal and vertical adjacencies and the number of diagonal adjacencies. Shape factors generated by transformations typically require information about the boundaries or contours of the segmented region and involve complex calculations, compared with region-based shape measurements and indices, which are therefore often referred to as region-based shape descriptors. Hu’s moment invariants (MIs) are popular shape descriptors, normalized functions created from information on both the shape boundary and the interior region [88]. Weed detection using machine vision relies on features such as plant color, leaf texture, shape and patterns; drought stress can alter leaf color and morphology, potentially affecting the reliability of machine vision-based weed detection [89]. However, such methods still lack universal segmentation capabilities for crop varieties with different leaf shapes and canopy structures, and designing a universal 3D segmentation method for different varieties at multiple growth stages is the current research frontier of plant phenotyping [90]. Biomorphic feature extraction offers strong interpretability, high stability and wide versatility in weed recognition, and it is especially suitable for scenarios that require the identification of different plant types.
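The three color moments mentioned above can be sketched as follows, assuming Python with NumPy; this is an illustrative per-channel formulation (mean, standard deviation, skewness), not the exact descriptor of any cited study.

```python
# Hedged sketch: color moments (mean, standard deviation, skewness) per channel.
import numpy as np

def color_moments(img: np.ndarray) -> np.ndarray:
    """img: H x W x C array in any color space (RGB, HSV, L*a*b*); returns 3*C values."""
    feats = []
    for c in range(img.shape[-1]):
        channel = img[..., c].astype(np.float64).ravel()
        mean, std = channel.mean(), channel.std()
        skew = ((channel - mean) ** 3).mean() / (std ** 3 + 1e-6)
        feats.extend([mean, std, skew])
    return np.array(feats)

features = color_moments(np.random.rand(64, 64, 3))   # placeholder RGB patch
```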
In deep learning, hybrid feature extraction refers to the simultaneous use of multiple levels, sources or types of features for model training and recognition. Different levels and types of features contain different levels of abstraction and semantic information. Hybrid feature extraction captures this diverse information and enables the model to represent the input data more richly. Single feature extraction may ignore or lose some critical information. Using features from multiple sources can make the model more robust and better adaptable to variations and noise in the input data.

3. Applications for Weed/Crop Discrimination

Deep learning algorithms have developed rapidly over the past few years, opening up the possibility of smart farms. Many researchers have studied the application of deep learning algorithms in smart agricultural equipment to recognize weeds and crops.

3.1. Learning Algorithm

Deep Neural Networks (DNNs) aim to replicate the communication between biological neurons through layers of nodes comprising input, hidden and output layers [91], extending the complexity, number of connections and hidden layers of Artificial Neural Networks (ANNs). A convolutional neural network (CNN), a type of DNN, assigns learnable weights and biases to different aspects and objects within input images to distinguish and classify objects, such as weeds [1]. Unlike traditional machine learning algorithms, which require manual feature selection and classifier choice, deep learning algorithms automatically extract features by learning from their errors. This automatic feature extraction sets deep learning apart from the broader field of machine learning [1,92,93]. To train and evaluate a deep CNN model, each input image undergoes a sequence of convolution layers with filters, followed by pooling layers, flattening and fully connected layers. CNNs autonomously capture the spatial and temporal dependencies within the input image using relevant filters, resulting in enhanced and more efficient image processing with a significantly reduced number of estimable parameters and less processing time. Because the graphical information formed at adjacent positions may jitter slightly, the pooling operation extracts the essential information from the preceding feature map; common pooling operations include maximum pooling and average pooling. The model thus maintains translation and rotation invariance while preserving crucial features [94].
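As an illustration of the convolution, pooling, flattening and fully connected stages just described, here is a minimal sketch of a small CNN classifier, assuming Python with PyTorch; the layer sizes, the 64 x 64 input resolution and the two-class (crop vs. weed) output are illustrative assumptions, not the architecture of any cited model.

```python
# Hedged sketch: a tiny CNN for crop/weed patch classification.
import torch
import torch.nn as nn

class SmallWeedCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),          # crop vs. weed logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = SmallWeedCNN()(torch.randn(1, 3, 64, 64))    # one dummy RGB patch
```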
The attention mechanism is becoming a key concept in the deep learning field. The inspiration for attention comes from the human perception process, where individuals naturally concentrate on specific information, simultaneously neglecting other perceptible details. This attention mechanism has significantly influenced the realm of natural language processing, particularly in prioritizing a subset of crucial words. The self-attention paradigm has evolved from the attention concepts, demonstrating enhancements in the performance of deep networks [95]. The utilization of the self-attention mechanism enables the establishment of global references during both model training and prediction. This significantly reduces the training time required to attain high accuracy [96,97]. The self-attention mechanism is a crucial element in transformers, explicitly modeling interactions among all entities in a sequence for structured prediction tasks. Essentially, a self-attention layer updates each element of a sequence by consolidating global information from the entire input sequence. In contrast to the fixed K × K neighborhood grid of convolution layers, the self-attention’s receptive field encompasses the entire image. This expanded receptive field of self-attention enhances its capability compared to CNN, all without introducing the computational costs associated with excessively large kernel sizes. Moreover, self-attention remains invariant to permutations and variations in the number of input points. Consequently, it can seamlessly operate on irregular inputs, in contrast to standard convolution that necessitates grid structures [98].
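To make the self-attention update concrete, the following is a minimal sketch of a single-head self-attention layer, assuming Python with PyTorch; the embedding dimension and sequence length are illustrative, and this simplified layer stands in for, rather than reproduces, the attention used in any cited transformer.

```python
# Hedged sketch: single-head self-attention over a sequence of patch embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, dim); every token attends to all tokens,
        # so each output element aggregates global context from the whole input.
        attn = F.softmax(self.q(x) @ self.k(x).transpose(-2, -1) * self.scale, dim=-1)
        return attn @ self.v(x)

out = SelfAttention(dim=64)(torch.randn(2, 16, 64))    # 16 patch tokens per image
```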
Overall, the attention mechanism has some advantages in improving model performance, processing sequence data and improving interpretability. However, for specific tasks, the attention mechanism is not always necessarily superior to the traditional deep neural network structure but, rather, the appropriate model structure should be selected according to the specific application scenario and task requirements.

3.2. Recognition Applications

The preceding sections described image collection and preprocessing for deep learning. The following is a review of the latest applications of these techniques to weed recognition and weed control in smart agricultural equipment. Table 5 summarizes the accuracy of different deep learning models for crop/weed recognition.

3.2.1. Spot Photographic Image Recognition

Spot photography refers to taking images with a cell phone or camera at a fixed location. This method of acquiring images is relatively simple but labor-intensive. Fixed-point photography usually takes place in relatively controlled environments, which means that the images are relatively consistent in terms of background, lighting and camera angle. This consistency helps deep learning models adapt to specific environments and conditions, and it also facilitates image labeling, improving the training efficiency of deep learning.
Taskeen Ashraf et al. [66] sought to classify images into three classes based on grass density. The first approach utilized texture features extracted from the gray-level co-occurrence matrix (GLCM) with a Radial Basis Function (RBF) kernel in a support vector machine (SVM), achieving an accuracy of 73%. A second technique employed scale- and rotation-invariant moments to classify grass density and outperformed the first, achieving an accuracy of 86% with a Random Forest classifier. This kind of density-dependent, quantitative spraying can effectively reduce pesticide use. To improve weed recognition, some researchers have combined machine learning with deep learning. Tao et al. [99] proposed a deep convolutional neural network with a support vector machine classifier aimed at improving the classification accuracy of winter oilseed rape seedlings and field weeds. They used a VGG network with true-color images (224 × 224 pixels) of oilseed rape and weeds as input; the proposed VGG-SVM model obtained higher classification accuracy, greater robustness and real-time performance. Borja Espejo-Garcia et al. [54] proposed a novel crop/weed identification system that fine-tunes pre-trained convolutional networks, such as Xception, Inception-ResNet, VGGNets, MobileNet and DenseNet, and combines them with “traditional” machine learning classifiers, such as support vector machines, XGBoost and logistic regression, trained on features extracted by the deep learning models. The aim of this approach was to prevent overfitting and achieve robust and consistent performance. Attention mechanisms have become increasingly popular in recent years and can greatly increase recognition rates. Helong Yu et al. [29] introduced a soybean field weed recognition model named Swin-DeepLab, built upon an enhanced DeepLabv3+ model with a Swin transformer as the feature extraction backbone; a convolutional block attention module (CBAM) was integrated after each feature fusion to improve the model’s utilization of focused information within the feature maps. The proposed network can further address the problem of weed recognition in intensive agricultural scenarios.
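The "deep features plus classical classifier" idea described above can be sketched as follows, assuming Python with torchvision and scikit-learn; the ResNet-18 backbone, the preprocessing values and the random placeholder images are illustrative assumptions and do not reproduce the cited VGG-SVM or fine-tuning setups.

```python
# Hedged sketch: features from a pretrained CNN fed to an SVM classifier.
import numpy as np
import torch
from PIL import Image
from sklearn.svm import SVC
from torchvision import models, transforms as T

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()             # keep the 512-d penultimate features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()

# Placeholder data: random RGB patches standing in for labeled crop/weed images.
rng = np.random.default_rng(0)
images = [Image.fromarray(rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)          # 0 = crop, 1 = weed

clf = SVC(kernel="rbf").fit(extract_features(images), labels)
```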

3.2.2. Satellite Photo Image Recognition

In recent decades, substantial progress has been achieved in sensing technologies, wireless communication, autonomous systems and artificial intelligence through collaborative research efforts worldwide [110]. Agricultural satellites use remote sensing techniques, including visible, infrared and microwave radiation, to capture information about the Earth’s surface. These satellites can provide high-resolution images that can be used to monitor different aspects of agricultural land. Some civil satellites in agriculture, combined with high-performance sensors, have produced a large number of images of farmland with various temporal, spatial and spectral resolutions. Among other things, these images are of great significance to farmers for seeding scheduling, pest and disease tracking and weed control [111]. Satellite remote sensing image acquisition, although providing large spatial coverage, has limited its development in the field of smart agriculture due to fixed and long revisit intervals and problems such as cloud cover [112].
Anita Sabat-Tomala et al. [83] conducted a comparison between two machine learning algorithms, support vector machine (SVM) and Random Forest (RF), for the identification of Solidago spp., Calamagrostis epigejos and Rubus spp. on HySpex hyperspectral aerial images. The classifications were performed on 430 spectral bands and on the most informative 30 bands extracted using the Minimum Noise Fraction (MNF) transformation. While satellite images are less suitable for weed recognition, semantic segmentation of remotely sensed images proves to be more effective. In the realm of digital agricultural services, there is a growing need for farmers or their advisors to provide digital records of field boundaries. Automatic extraction of field boundaries from satellite imagery would reduce the reliance on manual input of these records, which is time consuming and would underpin the provision of remote products and services [113,114].

3.2.3. Application of Drone Weed Identification

An unmanned aerial vehicle (UAV) is a powered flying vehicle that operates without a human operator. It can fly autonomously or be controlled remotely, equipped with various payloads. UAVs are rapidly advancing due to their benefits in flexible data acquisition and high spatial resolution. They offer a potent technical solution for numerous applications in precision agriculture (PA) [115,116]. For better acquisition of image data, the flight altitude of the UAV is an important parameter as it has a great impact on the resolution of the image, the flight time and the computational cost of image processing [117]. Moreover, UAVs have the flexibility to carry various payloads tailored to specific purposes. In precision agriculture (PA), UAVs are commonly equipped with remote sensors, such as RGB imaging, multispectral and hyperspectral imaging sensors, thermal infrared sensors, Light Detection and Ranging (LiDAR) and Synthetic Aperture Radar (SAR) to capture agricultural information [112,118]. UAVs are currently used for surveillance [119], disease detection [120] and weed management [20,24]. The use of these data allows for the identification of specific spatial features and time-varying information on crop characteristics as well as the targeted spraying of pesticides and fertilizers, resulting in a reduction in pests and diseases and an increase in crop yields and quality [115,121]. Figure 6 shows some of the uses of drones in smart agriculture.
Narmilan Amarasingam et al. [40] studied the potential of machine learning (ML) algorithms for detecting mouse-ear hawkweed leaves and flowers in multispectral (MS) images acquired by unmanned aerial vehicles (UAVs) at different spatial resolutions and compared several ML algorithms; the best-performing algorithm achieved 100% recognition accuracy. Jinya Su et al. [38] analyzed and mapped blackgrass in wheat fields by combining unmanned aerial vehicles (UAVs), multispectral imagery and machine learning techniques. Eighteen widely used vegetation indices were derived from five raw spectral bands, and various feature selection algorithms were then used to improve the simplicity and interpretability of the model; the selection of raw spectral bands and vegetation indices (VIs) was important for weed identification in multispectral images. Mohd Anul Haq et al. [103] proposed a novel CNN-LVQ model to detect weeds in soybean crop images and distinguish between grassy and broadleaf weeds; the uniqueness of their study lies in this innovative CNN-LVQ model, meticulous hyperparameter optimization and the use of authentic datasets. Faster R-CNN stands out as a deep learning approach incorporating a region proposal network (RPN); this network, formed by merging convolutional features with a classification network, allows training and testing in a seamless process, resulting in a fast detection rate that outperforms other conventional object detection methods. Shahbaz Khan et al. [24] optimized the architecture of the traditional Faster R-CNN: Residual Network 101 (ResNet-101) was deployed as the convolutional backbone instead of the usual Visual Geometry Group 16 (VGG16), and anchors were classified using a traditional SoftMax classifier. In addition, Saad Abouzahir et al. [102] used HOG blocks as key points to generate visual words based on the Bag of Visual Words (BOVW) method, with feature vectors formed as histograms of these visual words; a backpropagation neural network was used to detect weeds and classify plants from three different crop fields (sugar beet, carrot, soybean), achieving 97.7%, 93% and 96.6% accuracy in weed and crop differentiation.
Drones also have an important role in identifying weeds in fields and spraying pesticides in real time. Shahbaz Khan et al. [116] developed a deep learning-based real-time recognition system for drones. The capability of the system is achieved through a two-step process in which the target recognizer is based on a CNN model; the developed system achieved an average F1 score of 0.955, with an average classifier computation time of 3.68 ms. This deep learning model can effectively solve the problem of UAVs recognizing weeds for real-time pesticide spraying. Meanwhile, Gunasekaran Raja et al. [71] proposed a UAV-assisted weed detection method (DA-WDGN) using a modified multi-channel gray-level co-occurrence matrix (GLCM-M) and a normalized difference index with red threshold (NDIRT). In DA-WDGN, the UAV incorporates information and communication techniques to capture far-field data and accurately detect weeds; accurate detection limits the need for pesticides and helps to protect the environment. Reenul Reedha et al. [28] investigated the Vision Transformer (ViT) and applied it to plant classification in unmanned aerial vehicle (UAV) images, using a transfer learning strategy to increase performance on the test set while reducing the size of the training set. The ViT algorithm can efficiently process large-scale image data and therefore adapts well to the large number of images produced by UAVs in aerial photography; this efficient image processing capability helps to improve the speed and accuracy of weed identification. Moreover, the ViT algorithm is based on the self-attention mechanism, which captures global information in the image rather than being limited to local features. This property gives ViT combined with UAVs a great advantage in the future development of weed recognition.

3.2.4. Application of Agricultural Robotics for Weed Recognition

Agricultural robots represent an important trend in modern agricultural automation. By combining machinery, sensors and autonomous navigation technologies, they are revolutionizing agricultural production. Agricultural robots include modified tractors, small ground robots and aerial robots [13]. Modern agricultural equipment integrates advanced technologies, such as artificial intelligence, navigation, sensing systems and communication, to increase agricultural productivity and promote smart agriculture [22,122,123]. Navigation data, image recognition data and other information streams rely on sensors, including monocular cameras, binocular cameras, RGB cameras, panoramic cameras and spectral imaging systems [22]. In the early days of precision agriculture, most field image data were collected using ground cameras either mounted on unmanned ground vehicles (UGVs) or fixed next to vegetation patches [21]. Through image recognition, agricultural robots can perform laser weeding [124,125,126], pesticide spraying for weed control [13], spot picking [127,128], fertilizer application and other tasks. Figure 7 shows some of the agricultural robots used in smart agriculture.
Yajun Chen et al. [4] trained SVM classifiers on six single features and on several feature-fusion strategies. The highest classification accuracy was obtained with a fusion feature that combined rotation-invariant LBP features with a gray gradient co-occurrence matrix, which accurately detected various weeds and maize seedlings. Tufail, M. et al. [6] presented a machine learning-based crop/weed detection system for tractor boom sprayers to spot-spray tobacco crops in the field, proposing an SVM classifier with carefully selected combinations of tobacco plant features (texture, shape and color) that reached a classification accuracy of 96%. Julien Champ et al. [56] trained and evaluated an instance segmentation convolutional neural network designed to segment and identify each plant specimen visible in an agricultural robot image; they tuned the hyperparameters of a mask region-based convolutional neural network (Mask R-CNN) for this specific task and evaluated the resulting trained model. Data augmentation via Generative Adversarial Networks (GANs) can add entire synthetic scenes to the training data, expanding and enriching their information content [78].
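A minimal sketch of such feature fusion is given below: a rotation-invariant LBP histogram is concatenated with gray-level co-occurrence statistics and fed to an SVM, loosely following the idea in Chen et al. [4]. The GLCM properties from skimage stand in here for the gray gradient co-occurrence matrix used in the original work, and the parameter values are assumptions.

```python
# Minimal sketch: fused LBP + co-occurrence texture features with an SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbors and radius (assumed)

def fused_features(gray_img):
    """Concatenate an LBP histogram with GLCM contrast/energy/homogeneity."""
    lbp = local_binary_pattern(gray_img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    glcm = graycomatrix(gray_img, distances=[1],
                        angles=[0, np.pi / 2], levels=256, normed=True)
    stats = [graycoprops(glcm, prop).mean()
             for prop in ("contrast", "energy", "homogeneity")]
    return np.concatenate([hist, stats])

# images: list of 8-bit grayscale plant regions; labels: species index
def train_svm(images, labels):
    X = np.array([fused_features(im) for im in images])
    return SVC(kernel="rbf", C=10.0).fit(X, labels)
```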
There has been a lot of progress in image recognition using smart agricultural robots, and a number of scientists are working on intelligent weeding by agricultural robots to reduce the burden on farmers. Yayun Du et al. [5] provided a complete process, from efficient model training to deploying TensorRT-optimized models on single-board computers, and tested the performance of five different CNN models. They deployed MobileNetV2 on a small autonomous robot, SAMBot, for real-time weed detection. In a previously unseen flax field scenario (row spacing of 0.2–0.3 m), with crops and weeds, distortions, blurring and shadows, 90% accuracy was achieved. Paolo Rommel Sanchez et al. [107] developed a modular precision sprayer that distributes the high computational load of CNNs across parallel low-cost, low-power vision computing devices. The sprayer employed a customized precision spray algorithm based on SSD-MobileNetV1 running on a Jetson Nano 4 GB, and the model achieved 76% mAP0.5 at 19 fps when detecting weeds and soybeans in a broadcast-seeded field. Muhammad Shahab Alam et al. [55] developed and deployed a vision-based robotic spraying system. Using the vision system in combination with speed, flow and pressure sensors, the technology detected and categorized tobacco plants and weeds in real time. Targeted pesticide spraying has reduced pesticide use and effectively controlled environmental pollution, while laser-targeted weed control is being pursued as an even more environmentally friendly option for the future. Huibin Zhu et al. [16] designed a weeding robot based on the YOLOX convolutional neural network for removing weeds from corn seedling fields and verified the feasibility of a blue laser as a non-contact weeding tool. Similarly, Azmat Hussain et al. [126] designed a laser weeding robot based on the YOLOv5 convolutional neural network; field trials showed that the robot took approximately 23.7 h at a linear velocity of 0.07 m/s to weed a one-acre plot, including about 5 s of laser exposure to kill each weed plant. In another study, an innovative weeding method was proposed in which herbicides are applied after the weeds have been mechanically damaged, and a composite intelligent in-row weeding robot was designed on this basis; using a YOLOv5 model, detection accuracy reached 93.33% under real operating conditions. The machine weeded more efficiently than simple mechanical machines and used less pesticide than chemical spraying robots [129].
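Deploying such models on single-board computers typically involves exporting the trained network to an exchange format before building an optimized engine on the device. The sketch below exports a MobileNetV2 classifier to ONNX, the usual intermediate step before TensorRT optimization; the class count and input size are illustrative assumptions, not values from Du et al. [5].

```python
# Minimal sketch: prepare a MobileNetV2 weed/crop classifier for edge deployment
# by exporting it to ONNX; a TensorRT engine can then be built from weeds.onnx
# on the target device (e.g. with NVIDIA's trtexec tool).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 15  # e.g. flax + 14 weed classes (assumed)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)
model.eval()

# Export a fixed-shape graph (batch of one 224x224 RGB image).
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "weeds.onnx",
                  input_names=["image"], output_names=["logits"],
                  opset_version=13)
```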
Figure 7. (a) An autonomous agricultural robot used for weed removal [130]. (b) Precision agricultural sprayer [6]. (c) YOLOX-based blue laser cornfield weeding robot [16]. (d) Components of the modular agrochemical precision sprayer mounted on a push-type frame [107].

4. Discussion

In the context of advancing artificial intelligence, smart agriculture is a key development direction for major agricultural countries, and its progress is inseparable from the development of intelligent agricultural equipment. In recent years, the rapid development of agricultural robots, agricultural drones and satellites has offered new options for intelligent agriculture. All three types of smart agricultural equipment are likely to remain mainstream and have great potential for application in smart farms. Satellites play an important role in delineating farm boundaries for effective farm management, although they remain comparatively limited for weed and crop identification. Waldner, F. et al. proposed a method to facilitate the extraction of field boundaries from satellite images [113]. Using satellite technology to segment and monitor fields brings several benefits: it helps farmers plan land use more accurately, for example by identifying the most suitable locations for specific crops, avoiding overuse of land and increasing the sustainable utilization of agricultural land, and it allows better allocation of resources such as water, fertilizers and pesticides, reducing waste and environmental pollution. Drones, operating overhead, also play an integral role in smart agriculture. For example, high-resolution image acquisition provides datasets for training deep learning algorithms, and sensor-based crop detection and identification enable the precise application of agrochemicals and irrigation [24,38]. Combining deep learning with drones allows weed/crop identification and targeted pesticide spraying. Based on RGB camera sensing, CNNs achieve more than 92% accuracy in weed recognition, higher than classical machine learning approaches, and the results obtained with ViT suggest that real-time recognition for drone-based pesticide spraying will become feasible, so weeds can be dealt with more efficiently and with less waste of resources. Deep learning-based agricultural robots typically reach around 95% accuracy in weed recognition, and their use extends beyond data collection and weed identification to the precise picking and harvesting of crops.
Overall, the combination of deep learning and smart agricultural equipment has been widely used in weed/crop identification research, and in smart agriculture scenarios deep learning has been applied to solve the crop and weed identification problem. Deep learning-based weed/crop detection involves four steps: data collection, dataset preparation, weed detection, and weed/crop localization and classification. For data collection, intelligent agricultural equipment has made image acquisition straightforward, and the variety of available sensors has improved image quality. Multispectral cameras offer a useful compromise between RGB and hyperspectral cameras: they provide more spectral bands than RGB cameras yet are cheaper than hyperspectral cameras, so they can reduce cost while improving the quality of collected images in smart agriculture [38,39].
Thermal measurements from thermal infrared sensors can complement measurements from other sensors, such as hyperspectral, visible and optical distance sensors, and have also been shown to be effective in field crop phenotyping [44]. For training datasets, manual labeling by researchers is still required, which is a very labor-intensive task. Semi-supervised and unsupervised learning algorithms are a worthwhile direction for the future, as they can perform labeling during training iterations and greatly reduce the human workload. Feature extraction of weeds and crops is an important part of the recognition process; the main features are texture, spectral, spatial and biomorphic features. All four play an important role in deep learning-based weed recognition, but the current trend is hybrid extraction of spectral, texture and biomorphic features. The similarity between weeds and crops makes reliable detection with a single image feature almost impossible: commonly used image features can achieve weed detection, but accuracy is low and stability is poor in nonideal environments owing to the complex interference factors in real fields. Acquired images therefore need to be preprocessed for better recognition and classification. Researchers have segmented crops from the background by threshold and color segmentation and applied noise reduction to the images [34,131].
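A minimal sketch of this kind of preprocessing is shown below, combining a color index with Otsu threshold segmentation and median filtering for noise reduction; the excess-green (ExG) index and kernel size are common choices assumed here, not parameters prescribed by the cited studies.

```python
# Minimal sketch: separate green vegetation from soil background and denoise.
import cv2
import numpy as np

def vegetation_mask(bgr_img):
    """Return a binary mask of vegetation pixels from a BGR field image."""
    b, g, r = cv2.split(bgr_img.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b                       # excess-green color index
    exg8 = cv2.normalize(exg, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.medianBlur(mask, 5)              # remove salt-and-pepper noise
    return mask

# Usage: mask = vegetation_mask(cv2.imread("field.jpg"))
```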
The performance of different deep learning models in weed/crop identification is influenced by a variety of factors, the main one being the network structure. In general, lightweight CNN models are less accurate in weed recognition than full-size CNN models, but they are designed to be more compact, use fewer parameters and less computation, and require relatively little memory [104,109]. Lightweighting techniques include network pruning, quantization and depthwise separable convolution, which aim to minimize model size while retaining as much representational power as possible. With improvements built on the Faster R-CNN architecture, object detection, image classification and instance segmentation can be performed simultaneously in a single neural network. Researchers have improved Mask R-CNN by adding an attention mechanism and depthwise separable convolutions; this improves the model's ability to represent weed-related features and reduces the number of parameters, increasing computational speed [132]. In addition, the performance of deep learning algorithms is strongly influenced by the training strategy, which covers the training process of the model, the selection of hyperparameters, data augmentation and so on. For example, batch normalization has been used to accelerate training and improve the generalization performance of models [54]. The input dataset is also key to training deep learning models, as it is their basic source of information; as stated in Section 2 of this article, data augmentation of sample images improves accuracy. Models such as the Swin Transformer and DeepLabv3+ also excel in weed identification.
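For illustration, the sketch below implements the depthwise separable convolution mentioned above, the building block behind many lightweight CNNs such as the MobileNet family; the channel sizes are arbitrary assumptions.

```python
# Minimal sketch: depthwise separable convolution block.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv per channel followed by a pointwise 1x1 conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)   # batch normalization speeds training
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 64 to 128 channels uses 64*128*9 = 73,728 weights;
# the separable version uses 64*9 + 64*128 = 8,768, roughly an 8x reduction.
block = DepthwiseSeparableConv(64, 128)
y = block(torch.randn(1, 64, 56, 56))   # -> shape (1, 128, 56, 56)
```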

5. Challenges for Weed Recognition in Smart Farming Equipment and Future Trends

In terms of future development, the combination of sensor and drone technology can effectively increase the efficiency of identification. Among the recent innovations, unmanned aerial vehicles (UAVs) or drones have demonstrated their suitability for the timely tracking and assessment of vegetation status due to several advantages, as follows: (1) They can operate at low altitudes to provide aerial imagery with ultra-high spatial resolution, allowing for the detection of fine details of vegetation. (2) The flights can be scheduled with great flexibility according to critical moments imposed by vegetation progress over time. (3) They can use diverse sensors and perception systems, acquiring different ranges of the vegetation spectrum (visible, infrared, thermal). (4) This technology can also generate digital surface models (DSMs) with three-dimensional (3D) measurements of vegetation by using highly overlapping images and applying photoreconstruction procedures with the structure-from-motion (SfM) technique [23,35,44].
The future of agricultural robotics promises further developments in weed removal: (1) Increased intelligence and autonomy: Future agricultural robots will be more intelligent, with highly autonomous decision-making capabilities. Combined with artificial intelligence and deep learning, robots will be able to analyze farmland images and data in real time and make weed identification and weeding decisions without human intervention, improving operational efficiency. (2) Integration of multimodal sensing technology: Agricultural robots will integrate a variety of sensors, including vision, infrared, ultrasonic and other multimodal sensors, to obtain richer and more accurate information about the farmland. This will help identify weeds more accurately and adapt to different farmland environments. (3) Efficient and precise weeding technology: Future agricultural robots will use more precise and efficient weeding technology, which will require more advanced weeding systems and automated control. Although laser weeding currently appears very advantageous, issues such as operational safety and the risk of fire still need to be considered [124,125].
Deep learning also faces several challenges in weed and crop recognition. First, because the visual differences between weeds and crops are small, there is large inter-class similarity, so models are prone to confusion. In addition, weeds and crops vary with growth stage and environment, and models need good generalization capabilities to accommodate these variations [73]. Furthermore, datasets are costly to annotate, especially when collected and labeled in large-scale farmland environments, which complicates model training. To overcome these challenges, future research can be expanded in the following directions: First, further improve the robustness and generalization ability of deep learning models, and design more effective feature extraction methods and classification algorithms to cope with the similarities between weeds and crops. Second, develop larger-scale datasets containing samples from different times, locations and farming conditions to enhance model generalization; at the same time, techniques such as data augmentation and transfer learning can be applied to achieve better results with less data. In addition, combining sensors and smart agricultural equipment for the real-time identification of weeds and crops contributes to intelligent and precise decision making in agricultural production. Proper dosing of plant protection products is one of the key issues in agricultural production; advanced sensor technology allows crop growth to be monitored more accurately so that weeds or diseases can be treated in a timely manner. Applying the right amount of pesticide neither causes contamination through overuse nor reduces crop yields through underuse.

6. Conclusions

This review concentrates on the forefront applications of intelligent agricultural equipment, specifically emphasizing crop and weed identification, pivotal components in the trajectory of smart agriculture. The integration of sensors into smart agricultural equipment assumes a critical role in data acquisition, capturing extensive sets of high-dimensional images that serve as foundational training data for deep learning algorithms. Various preprocessing techniques are employed to refine the algorithmic processes, encompassing noise reduction, background effect elimination and image resizing. Deep learning algorithms emerge as powerful tools capable of analyzing complex, high-dimensional data with distinct characteristics compared to the training set, facilitating accurate crop identification. The adoption of hybrid feature extraction techniques underscores the inherent advantages of leveraging multiple features in tandem, contributing significantly to the efficacy of weed and crop identification processes. In the realm of machine learning and deep learning, the attention mechanism stands out as a particularly valuable and promising learning algorithm. Renowned for its high accuracy and expedited processing time, the attention mechanism proves advantageous in the context of crop and weed identification. These attributes position it as a formidable asset for smart agricultural equipment engaged in real-time weeding operations within agricultural fields. The emphasis on attention mechanisms reflects a forward-looking perspective, acknowledging their potential to augment the efficiency and accuracy of smart agricultural practices, particularly in the domain of weed management.

Author Contributions

Conceptualization, W.-H.S.; methodology, H.-R.Q.; software, H.-R.Q.; validation, H.-R.Q.; formal analysis, W.-H.S.; investigation, H.-R.Q.; resources, W.-H.S.; writing—original draft preparation, H.-R.Q. and W.-H.S.; writing—review and editing, W.-H.S.; supervision, W.-H.S.; project administration, W.-H.S.; funding acquisition, W.-H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 32371991.

Data Availability Statement

Data are available on request due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Murad, N.Y.; Mahmood, T.; Forkan, A.R.M.; Morshed, A.; Jayaraman, P.P.; Siddiqui, M.S. Weed Detection Using Deep Learning: A Systematic Literature Review. Sensors 2023, 23, 3670. [Google Scholar] [CrossRef] [PubMed]
  2. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199. [Google Scholar] [CrossRef]
  3. Llewellyn, R.; Ronning, D.; Clarke, M.; Mayfield, A.; Walker, S.; Ouzman, J. Impact of Weeds in Australian Grain Production; Grains Research and Development Corporation: Canberra, Australia, 2016. [Google Scholar]
  4. Chen, Y.; Wu, Z.; Zhao, B.; Fan, C.; Shi, S. Weed and Corn Seedling Detection in Field Based on Multi Feature Fusion and Support Vector Machine. Sensors 2021, 21, 212. [Google Scholar] [CrossRef]
  5. Du, Y.; Zhang, G.; Tsang, D.; Jawed, M.K. Deep-CNN based Robotic Multi-Class Under-Canopy Weed Control in Precision Farming. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 2273–2279. [Google Scholar]
  6. Tufail, M.; Iqbal, J.; Tiwana, M.I.; Alam, M.S.; Khan, Z.A.; Khan, M.T. Identification of Tobacco Crop Based on Machine Learning for a Precision Agricultural Sprayer. IEEE Access 2021, 9, 23814–23825. [Google Scholar] [CrossRef]
  7. Lease, B.A.; Wong, W.K.; Gopal, L.; Chiong, W.R. Weed Pixel Level Classification Based on Evolving Feature Selection on Local Binary Pattern with Shallow Network Classifier. In Proceedings of the 2nd International Conference on Materials Technology and Energy (ICMTE), Curtin Univ Malaysia, Sarawak, Malaysia, 6–8 November 2020. [Google Scholar]
  8. Mogili, U.M.R.; Deepak, B.B.V.L. Review on Application of Drone Systems in Precision Agriculture. In Proceedings of the 1st International Conference on Robotics and Smart Manufacturing (RoSMa), Chennai, India, 19–21 July 2018; pp. 502–509. [Google Scholar]
  9. Tataridas, A.; Kanatas, P.; Chatzigeorgiou, A.; Zannopoulos, S.; Travlos, I. Sustainable Crop and Weed Management in the Era of the EU Green Deal: A Survival Guide. Agronomy 2022, 12, 589. [Google Scholar] [CrossRef]
  10. Jeanmart, S.; Edmunds, A.J.F.; Lamberth, C.; Pouliot, M. Synthetic approaches to the 2010-2014 new agrochemicals. Bioorganic Med. Chem. 2016, 24, 317–341. [Google Scholar] [CrossRef]
  11. Eyre, M.D.; Critchley, C.N.R.; Leifert, C.; Wilcockson, S.J. Crop sequence, crop protection and fertility management effects on weed cover in an organic/conventional farm management trial. Eur. J. Agron. 2011, 34, 153–162. [Google Scholar] [CrossRef]
  12. Ampatzidis, Y.; De Bellis, L.; Luvisi, A. iPathology: Robotic Applications and Management of Plants and Plant Diseases. Sustainability 2017, 9, 1010. [Google Scholar] [CrossRef]
  13. Aravind, K.R.; Raja, P.; Perez-Ruiz, M. Task-based agricultural mobile robots in arable farming: A review. Span. J. Agric. Res. 2017, 15, e02R01-01. [Google Scholar] [CrossRef]
  14. Su, W.-H. Advanced Machine Learning in Point Spectroscopy, RGB- and Hyperspectral-Imaging for Automatic Discriminations of Crops and Weeds: A Review. Smart Cities 2020, 3, 767–792. [Google Scholar] [CrossRef]
  15. Ringland, J.; Bohm, M.; Baek, S.-R. Characterization of food cultivation along roadside transects with Google Street View imagery and deep learning. Comput. Electron. Agric. 2019, 158, 36–50. [Google Scholar] [CrossRef]
  16. Zhu, H.B.; Zhang, Y.Y.; Mu, D.L.; Bai, L.Z.; Zhuang, H.; Li, H. YOLOX-based blue laser weeding robot in corn field. Front. Plant Sci. 2022, 13, 1017803. [Google Scholar] [CrossRef]
  17. Bah, M.D.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef]
  18. Teimouri, N.; Dyrmann, M.; Nielsen, P.R.; Mathiassen, S.K.; Somerville, G.J.; Jorgensen, R.N. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks. Sensors 2018, 18, 1580. [Google Scholar] [CrossRef]
  19. Oghaz, M.M.; Razaak, M.; Kerdegari, H.; Argyriou, V.; Remagnino, P. Scene and Environment Monitoring Using Aerial Imagery and Deep Learning. In Proceedings of the 15th Annual International Conference on Distributed Computing in Sensor Systems (DCOSS), Santorini, Greece, 29–31 May 2019; pp. 362–369. [Google Scholar]
  20. Zhu, S.; Deng, J.; Zhang, Y.; Yang, C.; Yan, Z.; Xie, Y. Study on distribution map of weeds in rice field based on UAV remote sensing. J. South China Agric. Univ. 2020, 41, 67–74. [Google Scholar] [CrossRef]
  21. Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; ElMohandes, M. Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023, 7, 382. [Google Scholar] [CrossRef]
  22. Shi, J.Y.; Bai, Y.H.; Diao, Z.H.; Zhou, J.; Yao, X.B.; Zhang, B.H. Row Detection BASED Navigation and Guidance for Agricultural Robots and Autonomous Vehicles in Row-Crop Fields: Methods and Applications. Agronomy 2023, 13, 1780. [Google Scholar] [CrossRef]
  23. de Castro, A.I.; Shi, Y.; Maja, J.M.; Pena, J.M. UAVs for Vegetation Monitoring: Overview and Recent Scientific Contributions. Remote Sens. 2021, 13, 2139. [Google Scholar] [CrossRef]
  24. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Anwar, S. Deep learning-based identification system of weeds and crops in strawberry and pea fields for a precision agriculture sprayer. Precis. Agric. 2021, 22, 1711–1727. [Google Scholar] [CrossRef]
  25. Kim, Y.H.; Park, K.R. MTS-CNN: Multi-task semantic segmentation-convolutional neural network for detecting crops and weeds. Comput. Electron. Agric. 2022, 199, 107146. [Google Scholar] [CrossRef]
  26. Deepa, S.N.; Rasi, D. FHGSO: Flower Henry gas solubility optimization integrated deep convolutional neural network for image classification. Appl. Intell. 2022, 53, 7278–7297. [Google Scholar] [CrossRef]
  27. Babu, V.S.; Ram, N.V. Deep Residual CNN with Contrast Limited Adaptive Histogram Equalization for Weed Detection in Soybean Crops. Trait. Du Signal 2022, 39, 717–722. [Google Scholar] [CrossRef]
  28. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images. Remote Sens. 2022, 14, 592. [Google Scholar] [CrossRef]
  29. Yu, H.; Che, M.; Yu, H.; Zhang, J. Development of Weed Detection Method in Soybean Fields Utilizing Improved DeepLabv3+ Platform. Agronomy 2022, 12, 2889. [Google Scholar] [CrossRef]
  30. Sun, Y.; Chen, Y.; Jin, X.; Yu, J.; Chen, Y. AI differentiation of bok choy seedlings from weeds. Fujian J. Agric. Sci. 2021, 36, 1484–1490. [Google Scholar] [CrossRef]
  31. Wu, Z.N.; Chen, Y.J.; Zhao, B.; Kang, X.B.; Ding, Y.Y. Review of Weed Detection Methods Based on Computer Vision. Sensors 2021, 21, 3647. [Google Scholar] [CrossRef] [PubMed]
  32. Xu, X.; Wang, L.; Shu, M.; Liang, X.; Ghafoor, A.Z.; Liu, Y.; Ma, Y.; Zhu, J. Detection and Counting of Maize Leaves Based on Two-Stage Deep Learning with UAV-Based RGB Image. Remote Sens. 2022, 14, 5388. [Google Scholar] [CrossRef]
  33. Fan, K.-J.; Su, W.-H. Applications of Fluorescence Spectroscopy, RGB- and MultiSpectral Imaging for Quality Determinations of White Meat: A Review. Biosensors 2022, 12, 76. [Google Scholar] [CrossRef]
  34. Li, Y.; Al-Sarayreh, M.; Irie, K.; Hackell, D.; Bourdot, G.; Reis, M.M.; Ghamkhar, K. Identification of Weeds Based on Hyperspectral Imaging and Machine Learning. Front. Plant Sci. 2021, 11, 611622. [Google Scholar] [CrossRef]
  35. Diao, Z.; Yan, J.; He, Z.; Zhao, S.; Guo, P. Corn seedling recognition algorithm based on hyperspectral image and lightweight-3D-CNN. Comput. Electron. Agric. 2022, 201, 107343. [Google Scholar] [CrossRef]
  36. Dashti, H.; Glenn, N.F.; Ustin, S.; Mitchell, J.J.; Qi, Y.; Ilangakoon, N.T.; Flores, A.N.; Luis Silvan-Cardenas, J.; Zhao, K.; Spaete, L.P.; et al. Empirical Methods for Remote Sensing of Nitrogen in Drylands May Lead to Unreliable Interpretation of Ecosystem Function. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3993–4004. [Google Scholar] [CrossRef]
  37. Lou, Z.; Quan, L.; Sun, D.; Li, H.; Xia, F. Hyperspectral remote sensing to assess weed competitiveness in maize farmland ecosystems. Sci. Total Environ. 2022, 844, 157071. [Google Scholar] [CrossRef] [PubMed]
  38. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.-H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  39. Su, J.; Coombes, M.; Liu, C.; Zhu, Y.; Song, X.; Fang, S.; Guo, L.; Chen, W.H. Machine Learning-Based Crop Drought Mapping System by UAV Remote Sensing RGB Imagery. Unmanned Syst. 2020, 8, 71–83. [Google Scholar] [CrossRef]
  40. Amarasingam, N.; Hamilton, M.; Kelly, J.E.; Zheng, L.; Sandino, J.; Gonzalez, F.; Dehaan, R.L.; Cherry, H. Autonomous Detection of Mouse-Ear Hawkweed Using Drones, Multispectral Imagery and Supervised Machine Learning. Remote Sens. 2023, 15, 1633. [Google Scholar] [CrossRef]
  41. Lopez, L.O.; Ortega, G.; Aguera-Vega, F.; Carvajal-Ramirez, F.; Martinez-Carricondo, P.; Garzon, E.M. Multispectral Imaging for Weed Identification in Herbicides Testing. Informatica 2022, 33, 771–793. [Google Scholar] [CrossRef]
  42. Aguera-Vega, F.; Aguera-Puntas, M.; Aguera-Vega, J.; Martinez-Carricondo, P.; Carvajal-Ramirez, F. Multi-sensor imagery rectification and registration for herbicide testing. Measurement 2021, 175, 109049. [Google Scholar] [CrossRef]
  43. Allred, B.; Martinez, L.; Fessehazion, M.K.; Rouse, G.; Williamson, T.N.; Wishart, D.; Koganti, T.; Freeland, R.; Eash, N.; Batschelet, A.; et al. Overall results and key findings on the use of UAV visible-color, multispectral, and thermal infrared imagery to map agricultural drainage pipes. Agric. Water Manag. 2020, 232, 106036. [Google Scholar] [CrossRef]
  44. Eide, A.; Koparan, C.; Zhang, Y.; Ostlie, M.; Howatt, K.; Sun, X. UAV-Assisted Thermal Infrared and Multispectral Imaging of Weed Canopies for Glyphosate Resistance Detection. Remote Sens. 2021, 13, 4606. [Google Scholar] [CrossRef]
  45. Pineda, M.; Baron, M.; Perez-Bueno, M.L. Thermal Imaging for Plant Stress Detection and Phenotyping. Remote Sens. 2021, 13, 68. [Google Scholar] [CrossRef]
  46. Wang, X.; Pan, H.; Guo, K.; Yang, X.; Luo, S. The evolution of LiDAR and its application in high precision measurement. IOP Conf. Ser. Earth Environ. Sci. 2020, 502, 012008. [Google Scholar] [CrossRef]
  47. Moreno, H.; Valero, C.; Bengochea-Guevara, J.M.; Ribeiro, A.; Garrido-Izard, M.; Andujar, D. On-Ground Vineyard Reconstruction Using a LiDAR-Based Automated System. Sensors 2020, 20, 1102. [Google Scholar] [CrossRef] [PubMed]
  48. Sudars, K.; Jasko, J.; Namatevs, I.; Ozola, L.; Badaukis, N. Dataset of annotated food crops and weed images for robotic computer vision control. Data Brief 2020, 31, 105833. [Google Scholar] [CrossRef]
  49. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 2058. [Google Scholar] [CrossRef]
  50. Jiang, H.H.; Zhang, C.Y.; Qiao, Y.L.; Zhang, Z.; Zhang, W.J.; Song, C.Q. CNN feature based graph convolutional network for weed and crop recognition in smart farming. Comput. Electron. Agric. 2020, 174, 105450. [Google Scholar] [CrossRef]
  51. Sa, I.; Chen, Z.T.; Popovic, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. weedNet: Dense Semantic Weed Classification Using Multispectral Images and MAV for Smart Farming. IEEE Robot. Autom. Lett. 2018, 3, 588–595. [Google Scholar] [CrossRef]
  52. Binch, A.; Fox, C.W. Controlled comparison of machine vision algorithms for Rumex and Urtica detection in grassland. Comput. Electron. Agric. 2017, 140, 123–138. [Google Scholar] [CrossRef]
  53. Osorio, K.; Puerto, A.; Pedraza, C.; Jamaica, D.; Rodriguez, L. A Deep Learning Approach for Weed Detection in Lettuce Crops Using Multispectral Images. Agriengineering 2020, 2, 471–488. [Google Scholar] [CrossRef]
  54. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S.; Vasilakoglou, I. Towards weeds identification assistance through transfer learning. Comput. Electron. Agric. 2020, 171, 105306. [Google Scholar] [CrossRef]
  55. Alam, M.S.; Alam, M.; Tufail, M.; Khan, M.U.; Guenes, A.; Salah, B.; Nasir, F.E.; Saleem, W.; Khan, M.T. TobSet: A New Tobacco Crop and Weeds Image Dataset and Its Utilization for Vision-Based Spraying by Agricultural Robots. Appl. Sci. 2022, 12, 1308. [Google Scholar] [CrossRef]
  56. Champ, J.; Mora-Fallas, A.; Goeau, H.; Mata-Montero, E.; Bonnet, P.; Joly, A. Instance segmentation for the fine detection of crop and weed plants by precision agricultural robots. Appl. Plant Sci. 2020, 8, e11373. [Google Scholar] [CrossRef]
  57. Di Cicco, M.; Potena, C.; Grisetti, G.; Pretto, A. Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)/Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, Vancouver, BC, Canada, 24–28 September 2017; pp. 5188–5195. [Google Scholar]
  58. Hasan, A.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  59. Wang, A.; Xu, Y.; Wei, X.; Cui, B. Semantic Segmentation of Crop and Weed using an Encoder-Decoder Network and Image Enhancement Method under Uncontrolled Outdoor Illumination. IEEE Access 2020, 8, 81724–81734. [Google Scholar] [CrossRef]
  60. Ramirez, W.; Achanccaray, P.; Mendoza, L.F.; Pacheco, M.A.C. Deep Convolutional Neural Networks For Weed Detection in Agricultural Crops Using Optical Aerial Images. In Proceedings of the IEEE Latin American GRSS and ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 21–26 March 2020; pp. 133–137. [Google Scholar]
  61. Vypirailenko, D.; Kiseleva, E.; Shadrin, D.; Pukalchik, M. Deep learning techniques for enhancement of weeds growth classification. In Proceedings of the IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Glasgow, UK, 17–20 May 2021. [Google Scholar]
  62. Gee, C.; Denimal, E. RGB Image-Derived Indicators for Spatial Assessment of the Impact of Broadleaf Weeds on Wheat Biomass. Remote Sens. 2020, 12, 2982. [Google Scholar] [CrossRef]
  63. Slaughter, D.C. The Biological Engineer: Sensing the Difference Between Crops and Weeds. In Automation: The Future of Weed Control in Cropping Systems; Young, S.L., Pierce, F.J., Eds.; Springer: Dordrecht, The Netherlands, 2014; pp. 71–95. [Google Scholar] [CrossRef]
  64. Al-Badri, A.H.; Ismail, N.A.; Al-Dulaimi, K.; Salman, G.A.; Khan, A.R.; Al-Sabaawi, A.; Salam, M.S.H. Classification of weed using machine learning techniques: A review-challenges, current and future potential techniques. J. Plant Dis. Prot. 2022, 129, 745–768. [Google Scholar] [CrossRef]
  65. Cimpoi, M.; Maji, S.; Kokkinos, I.; Vedaldi, A. Deep Filter Banks for Texture Recognition, Description, and Segmentation. Int. J. Comput. Vis. 2016, 118, 65–94. [Google Scholar] [CrossRef] [PubMed]
  66. Ashraf, T.; Khan, Y.N. Weed density classification in rice crop using computer vision. Comput. Electron. Agric. 2020, 175, 105590. [Google Scholar] [CrossRef]
  67. Ayalew, G.; Zaman, Q.U.; Schumann, A.W.; Percival, D.C.; Chang, Y. An investigation into the potential of Gabor wavelet features for scene classification in wild blueberry fields. Artif. Intell. Agric. 2021, 5, 72–81. [Google Scholar] [CrossRef]
  68. Zhang, L.; Zhang, Z.; Wu, C.; Sun, L. Segmentation algorithm for overlap recognition of seedling lettuce and weeds based on SVM and image blocking. Comput. Electron. Agric. 2022, 201. [Google Scholar] [CrossRef]
  69. Miao, R.; Yang, H.; Wu, J.; Liu, H. Weed identification of overlapping spinach leaves based on image sub-block and reconstruction. Trans. Chin. Soc. Agric. Eng. 2020, 36, 178–184. [Google Scholar]
  70. Vi Nguyen Thanh, L.; Ahderom, S.; Alameh, K. Performances of the LBP Based Algorithm over CNN Models for Detecting Crops and Weeds with Similar Morphologies. Sensors 2020, 20, 2193. [Google Scholar] [CrossRef]
  71. Raja, G.; Dev, K.; Philips, N.D.; Suhaib, S.A.M.; Deepakraj, M.; Ramasamy, R.K. DA-WDGN: Drone-Assisted Weed Detection using GLCM-M features and NDIRT indices. In Proceedings of the IEEE Conference on Computer Communications Workshops (IEEE INFOCOM), Vancouver, BC, Canada, 9–12 May 2021. [Google Scholar]
  72. Zaman, M.H.M.; Mustaza, S.M.; Ibrahim, M.F.; Zulkifley, M.A.; Mustafa, M.M. Weed Classification Based on Statistical Features from Gabor Transform Magnitude. In Proceedings of the International Conference on Decision Aid Sciences and Application (DASA), Sakheer, Bahrain, 7–8 December 2021. [Google Scholar]
  73. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
  74. Bailey, D.; Chang, Y.; Le Moan, S. Analysing Arbitrary Curves from the Line Hough Transform. J. Imaging 2020, 6, 26. [Google Scholar] [CrossRef] [PubMed]
  75. Teplyakov, L.; Kaymakov, K.; Shvets, E.; Nikolaev, D. Line detection via a lightweight CNN with a Hough Layer. In Proceedings of the 13th International Conference on Machine Vision, Rome, Italy, 2–6 November 2021. [Google Scholar]
  76. Qi, M.; Wang, Y.; Chen, Y.; Xin, H.; Xu, Y.; Meng, H.; Wang, A. Center detection algorithm for printed circuit board circular marks based on image space and parameter space. J. Electron. Imaging 2023, 32, 011002. [Google Scholar] [CrossRef]
  77. Islam, N.; Rashid, M.M.; Wibowo, S.; Xu, C.-Y.; Morshed, A.; Wasimi, S.A.; Moore, S.; Rahman, S.M. Early Weed Detection Using Image Processing and Machine Learning Techniques in an Australian Chilli Farm. Agriculture 2021, 11, 387. [Google Scholar] [CrossRef]
  78. Fawakherji, M.; Potena, C.; Pretto, A.; Bloisi, D.D.; Nardi, D. Multispectral Image Synthesis for Crop/Weed Segmentation in Precision Farming. Robot. Auton. Syst. 2021, 146, 103861. [Google Scholar] [CrossRef]
  79. Ustin, S.L.; Jacquemoud, S. How the Optical Properties of Leaves Modify the Absorption and Scattering of Energy and Enhance Leaf Functionality. Remote Sens. Plant Biodivers. 2020, 14, 349–384. [Google Scholar]
  80. Zhu, W.; Sun, Z.; Huang, Y.; Yang, T.; Li, J.; Zhu, K.; Zhang, J.; Yang, B.; Shao, C.; Peng, J.; et al. Optimization of multi-source UAV RS agro-monitoring schemes designed for field-scale crop phenotyping. Precis. Agric. 2021, 22, 1768–1802. [Google Scholar] [CrossRef]
  81. Calderon, R.; Montes-Borrego, M.; Landa, B.B.; Navas-Cortes, J.A.; Zarco-Tejada, P.J. Detection of downy mildew of opium poppy using high-resolution multispectral and thermal imagery acquired with an unmanned aerial vehicle. Precis. Agric. 2014, 15, 639–661. [Google Scholar] [CrossRef]
  82. Bellvert, J.; Zarco-Tejada, P.J.; Girona, J.; Fereres, E. Mapping crop water stress index in a ‘Pinot-noir’ vineyard: Comparing ground measurements with thermal remote sensing imagery from an unmanned aerial vehicle. Precis. Agric. 2014, 15, 361–376. [Google Scholar] [CrossRef]
  83. Sabat-Tomala, A.; Raczko, E.; Zagajewski, B. Comparison of Support Vector Machine and Random Forest Algorithms for Invasive and Expansive Species Classification Using Airborne Hyperspectral Data. Remote Sens. 2020, 12, 516. [Google Scholar] [CrossRef]
  84. Shen, Y.; Yin, Y.; Li, B.; Zhao, C.; Li, G. Detection of impurities in wheat using terahertz spectral imaging and convolutional neural networks. Comput. Electron. Agric. 2021, 181, 105931. [Google Scholar] [CrossRef]
  85. Guo, X.; Ge, Y.; Liu, F.; Yang, J. Identification of maize and wheat seedlings and weeds based on deep learning. Front. Earth Sci. 2023, 11, 1146558. [Google Scholar] [CrossRef]
  86. Wang, Y.; Zhang, X.; Ma, G.; Du, X.; Shaheen, N.; Mao, H. Recognition of weeds at asparagus fields using multi-feature fusion and backpropagation neural network. Int. J. Agric. Biol. Eng. 2021, 14, 190–198. [Google Scholar] [CrossRef]
  87. Tannouche, A.; Sbai, K.; Rahmoune, M.; Zoubir, A.; Agounoune, R.; Saadani, R.; Rahmani, A. A Fast and Efficient Shape Descriptor for an Advanced Weed Type Classification Approach. Int. J. Electr. Comput. Eng. 2016, 6, 1168–1175. [Google Scholar]
  88. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160. [Google Scholar] [CrossRef]
  89. Zhuang, J.; Jin, X.; Chen, Y.; Meng, W.; Wang, Y.; Yu, J.; Muthukumar, B. Drought stress impact on the performance of deep convolutional neural networks for weed detection in Bahiagrass. Grass Forage Sci. 2023, 78, 214–223. [Google Scholar] [CrossRef]
  90. Li, D.; Shi, G.; Li, J.; Chen, Y.; Zhang, S.; Xiang, S.; Jin, S. PlantNet: A dual-function point cloud segmentation network for multiple plant species. Isprs J. Photogramm. Remote Sens. 2022, 184, 243–263. [Google Scholar] [CrossRef]
  91. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw 2015, 61, 85–117. [Google Scholar] [CrossRef]
  92. Zhu, Y.; Wang, M.; Yin, X.; Zhang, J.; Meijering, E.; Hu, J. Deep Learning in Diverse Intelligent Sensor Based Systems. Sensors 2023, 23, 62. [Google Scholar] [CrossRef]
  93. Garibaldi-Marquez, F.; Flores, G.; Mercado-Ravell, D.A.; Ramirez-Pedraza, A.; Valentin-Coronado, L.M. Weed Classification from Natural Corn Field-Multi-Plant Images Based on Shallow and Deep Learning. Sensors 2022, 22, 3021. [Google Scholar] [CrossRef]
  94. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.Q.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  95. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar]
  96. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Houlsby, N. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  97. Jiang, K.; Afzaal, U.; Lee, J. Transformer-Based Weed Segmentation for Grass Management. Sensors 2023, 23, 65. [Google Scholar] [CrossRef]
  98. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in Vision: A Survey. ACM Comput. Surv. 2022, 54, 200. [Google Scholar] [CrossRef]
  99. Tao, T.; Wei, X. A hybrid CNN-SVM classifier for weed recognition in winter rape field. Plant Methods 2022, 18, 29. [Google Scholar] [CrossRef]
  100. Zhang, H.; Wang, Z.; Guo, Y.; Ma, Y.; Cao, W.; Chen, D.; Yang, S.; Gao, R. Weed Detection in Peanut Fields Based on Machine Vision. Agriculture 2022, 12, 1541. [Google Scholar] [CrossRef]
  101. Jin, X.; Sun, Y.; Che, J.; Bagavathiannan, M.; Yu, J.; Chen, Y. A novel deep learning-based method for detection of weeds in vegetables. Pest Manag. Sci. 2022, 78, 1861–1869. [Google Scholar] [CrossRef]
  102. Abouzahir, S.; Sadik, M.; Sabir, E. Paper Bag-of-visual-words-augmented Histogram of Oriented Gradients for efficient weed detection. Biosyst. Eng. 2021, 202, 179–194. [Google Scholar] [CrossRef]
  103. Haq, M.A. CNN Based Automated Weed Detection System Using UAV Imagery. Comput. Syst. Sci. Eng. 2022, 42, 837–849. [Google Scholar] [CrossRef]
  104. Milioto, A.; Lottes, P.; Stachniss, C. Real-Time Blob-Wise Sugar Beets vs. Weeds Classification for Monitoring Fields Using Convolutional Neural Networks. In Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics, Bonn, Germany, 4–7 September 2017; pp. 41–48. [Google Scholar]
  105. Ong, P.; Teo, K.S.; Sia, C.K. UAV-based weed detection in Chinese cabbage using deep learning. Smart Agric. Technol. 2023, 4, 100181. [Google Scholar] [CrossRef]
  106. Quan, L.; Feng, H.; Li, Y.; Wang, Q.; Zhang, C.; Liu, J.; Yuan, Z. Maize seedling detection under different growth stages and complex field environments based on an improved Faster R-CNN. Biosyst. Eng. 2019, 184, 1–23. [Google Scholar] [CrossRef]
  107. Sanchez, P.R.; Zhang, H. Evaluation of a CNN-Based Modular Precision Sprayer in Broadcast-Seeded Field. Sensors 2022, 22, 9723. [Google Scholar] [CrossRef]
  108. Zhang, W.H.; Hansen, M.F.; Volonakis, T.N.; Smith, M.; Smith, L.; Wilson, J.; Ralston, G.; Broadbent, L.; Wright, G. Broad-Leaf Weed Detection in Pasture. In Proceedings of the 3rd IEEE International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 101–105. [Google Scholar]
  109. McCool, C.; Perez, T.; Upcroft, B. Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics. IEEE Robot. Autom. Lett. 2017, 2, 1344–1351. [Google Scholar] [CrossRef]
  110. Asseng, S.; Asche, F. Future farms without farmers. Sci. Robot. 2019, 4, eaaw1875. [Google Scholar] [CrossRef]
  111. Wang, D.S.; Cao, W.J.; Zhang, F.; Li, Z.L.; Xu, S.; Wu, X.Y. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  112. Zhang, H.D.; Wang, L.Q.; Tian, T.; Yin, J.H. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [Google Scholar] [CrossRef]
  113. Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens. Environ. 2020, 245, 1221. [Google Scholar] [CrossRef]
  114. Yuan, X.H.; Shi, J.F.; Gu, L.C. A review of deep learning methods for semantic segmentation of remote sensing imagery. Expert Syst. Appl. 2021, 169, 114417. [Google Scholar] [CrossRef]
  115. Liu, J.; Xiang, J.J.; Jin, Y.J.; Liu, R.H.; Yan, J.N.; Wang, L.Z. Boost Precision Agriculture with Unmanned Aerial Vehicle Remote Sensing and Edge Intelligence: A Survey. Remote Sens. 2021, 13, 4387. [Google Scholar] [CrossRef]
  116. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Iqbal, J.; Wasim, A. Real-time recognition of spraying area for UAV sprayers using a deep learning approach. PLoS ONE 2021, 16, e0249436. [Google Scholar] [CrossRef] [PubMed]
  117. De Castro, A.I.; Ehsani, R.; Ploetz, R.; Crane, J.H.; Abdulridha, J. Optimum spectral and geometric parameters for early detection of laurel wilt disease in avocado. Remote Sens. Environ. 2015, 171, 33–44. [Google Scholar] [CrossRef]
  118. Xie, C.Q.; Yang, C. A review on plant high-throughput phenotyping traits using UAV-based sensors. Comput. Electron. Agric. 2020, 178, 105731. [Google Scholar] [CrossRef]
  119. Allred, B.; Eash, N.; Freeland, R.; Martinez, L.; Wishart, D. Effective and efficient agricultural drainage pipe mapping with UAS thermal infrared imagery: A case study. Agric. Water Manag. 2018, 197, 132–137. [Google Scholar] [CrossRef]
  120. Guo, A.T.; Huang, W.J.; Dong, Y.Y.; Ye, H.C.; Ma, H.Q.; Liu, B.; Wu, W.B.; Ren, Y.; Ruan, C.; Geng, Y. Wheat Yellow Rust Detection Using UAV-Based Hyperspectral Technology. Remote Sens. 2021, 13, 123. [Google Scholar] [CrossRef]
  121. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
  122. Sivakumar, A.N.; Modi, S.; Gasparino, M.V.; Ellis, C.; Velasquez, A.E.B.; Chowdhary, G.; Gupta, S. Learned Visual Navigation for Under-Canopy Agricultural Robots. In Proceedings of the Conference on Robotics—Science and Systems, Electr Network, Virtual, 12–16 July 2021. [Google Scholar]
  123. Subeesh, A.; Mehta, C.R. Automation and digitization of agriculture using artificial intelligence and internet of things. Artif. Intell. Agric. 2021, 5, 278–291. [Google Scholar] [CrossRef]
  124. Andreasen, C.; Scholle, K.; Saberi, M. Laser Weeding With Small Autonomous Vehicles: Friends or Foes? Front. Agron. 2022, 4, 841086. [Google Scholar] [CrossRef]
  125. Tran, D.; Schouteten, J.J.; Degieter, M.; Krupanek, J.; Jarosz, W.; Areta, A.; Emmi, L.; De Steur, H.; Gellynck, X. European stakeholders’ perspectives on implementation potential of precision weed control: The case of autonomous vehicles with laser treatment. Precis. Agric. 2023, 24, 2200–2222. [Google Scholar] [CrossRef]
  126. Hussain, A.; Fatima, H.S.; Zia, S.M.; Hasan, S.; Khurram, M.; Stricker, D.; Afzal, M.Z. Development of Cost-Effective and Easily Replicable Robust Weeding Machine-Premiering Precision Agriculture in Pakistan. Machines 2023, 11, 287. [Google Scholar] [CrossRef]
  127. Xu, S.Y.; Wu, J.J.; Zhu, L.; Li, W.H.; Wang, Y.T.; Wang, N. A novel monocular visual navigation method for cotton-picking robot based on horizontal spline segmentation. In Proceedings of the 9th International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR)—Automatic Target Recognition and Navigation, Enshi, China, 31 October–1 November 2015. [Google Scholar]
  128. Jia, W.K.; Zhang, Y.; Lian, J.; Zheng, Y.J.; Zhao, D.; Li, C.J. Apple harvesting robot under information technology: A review. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420925310. [Google Scholar] [CrossRef]
  129. Jiang, W.; Quan, L.Z.; Wei, G.Y.; Chang, C.; Geng, T.Y. A conceptual evaluation of a weed control method with post-damage application of herbicides: A composite intelligent intra-row weeding robot. Soil Tillage Res. 2023, 234, 105837. [Google Scholar] [CrossRef]
  130. Mohamed, E.S.; Belal, A.; Abd-Elmabod, S.K.; El-Shirbeny, M.A.; Gad, A.; Zahran, M.B. Smart farming for improving agricultural management. Egypt. J. Remote Sens. Space Sci. 2021, 24, 971–981. [Google Scholar] [CrossRef]
  131. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646. [Google Scholar] [CrossRef]
  132. Jin, S.; Dai, H.; Peng, J.; He, Y.; Zhu, M.; Yu, W.; Li, Q. An Improved Mask R-CNN Method for Weed Segmentation. In Proceedings of the 17th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 16–19 December 2022. [Google Scholar]
Figure 1. General workflow of image processing-based weed detection.
Figure 2. Texture-based segmentation using Gabor filters (orientation between 0 and 135 degrees in steps of 45 degrees) [6].
Figure 3. From left to right: line detection in bean (a) and spinach (b) fields. Detected lines are shown in blue. In the spinach field, the inter-row distance and crop row orientation are not regular; the detected lines are mainly located in the center of the crop rows [17].
Figure 4. Sample images of hawkweed flowers based on spectral feature recognition. (a) Actual multispectral image; (b) prediction result; (c) prediction results overlaid on the actual image (EPSG:4326—WGS 84) [40].
Figure 5. Pixelated segmentation of green plant leaves [85].
Figure 6. Unmanned aerial vehicles used in agriculture. (a) Unmanned aerial vehicle (UAV) hyperspectral imaging system [120]. (b) Drone spraying of pesticides.
Table 2. Effect of different image enhancements on image segmentation.

Crop | Methods | Enhancement | Input Representation | MIoU | Reference
Sugar beet | An encoder-decoder deep learning network | HE / PS-AC / DPE | RGB | 92.75% / 94.29% / 93.50% | Wang, A. et al. [59]
Oilseed | An encoder-decoder deep learning network | HE / PS-AC / DPE | RGB | 94.80% / 95.80% / 96.12% | Wang, A. et al. [59]
Soybean | DeepLabv3+ / Swin+DeepLabv3+ / Swin-DeepLab | Random rotation, random flipping, random cropping, adding Gaussian noise, and increasing contrast | RGB | 88.59% / 91.10% / 91.53% | Yu, H. et al. [29]
Sunflower | An algorithm proposed by Lopez, L.O. et al. [41] / U-Net / FPN | Perspective deformity correction program | Multispectral | 89% / 90% / 89% | Lopez, L.O. et al. [41]

HE—Histogram Equalization; PS-AC—PS Auto Contrast; DPE—Deep Photo Enhancer; RGB—red, green, blue; Swin-DeepLab—Hierarchical Vision Transformer for Semantic Segmentation.
Table 5. Classification of weeds and crops with regard to algorithms.

Methods | Crop | Weed | Sensor | Accuracy | Reference
Swin-DeepLab | Soybean | Graminoid weeds such as Digitaria sanguinalis (L.) Scop. and Setaria viridis (L.) Beauv.; broadleaf weeds such as Chenopodium glaucum L., Acalypha australis L. and Amaranthus retroflexus L. | RGB | 91.53% | Yu, H. et al. [29]
Lightweight-3D-CNN | Crop seedlings | Weeds | Hyperspectral | >97.4% | Diao, Z. et al. [35]
A combination of fine-tuned DenseNet and Support Vector Machine | Tomato (Solanum lycopersicum L.) and cotton (Gossypium hirsutum L.) | Black nightshade (Solanum nigrum L.) and velvetleaf (Abutilon theophrasti Medik.) | RGB | 99.29% | Espejo-Garcia, B. et al. [54]
VGG16, VGG19, Xception | Corn | NLW, BLW | RGB | 97.83%, 97.44%, 97.24% | Garibaldi-Marquez, F. et al. [93]
GA-SVM | Lettuce | Chenopodium serotinum, Polygonum lapathifolium | RGB | 87.55% | Zhang, L. et al. [68]
ViT | Maize and wheat | Black-grass, charlock, cleaver, common chickweed, common wheat, fat hen, loose silky-bent, maize, scentless mayweed, shepherd's purse, small-flowered cranesbill and sugar beet | RGB | 98.1% | Guo, X. et al. [85]
AlexNet, GoogleNet, VGG | Florida pusley | Bahiagrass | RGB | 95%, 96%, 95% | Zhuang, J. et al. [89]
PlantNet | Tobacco, tomato and sorghum | Monocotyledonous weeds | High-precision 3D laser | 95.04%, 96.44%, 98.03% | Li, D. et al. [90]
VGG-SVM | Winter rape seedlings | Weeds associated with rape seedlings | RGB | 92.1% | Tao, T. and Wei, X. [99]
YOLOv4-Tiny | Peanuts | Portulaca oleracea, Eleusine indica, Chenopodium album, Amaranthus blitum, Abutilon theophrasti and Calystegia | RGB | 96.7% | Zhang, H. et al. [100]
U-Net | Sunflower | Chenopodium album L., Convolvulus arvensis L. and Cyperus rotundus L. | Multispectral | 90% | Lopez, L.O. et al. [41]
YOLO-v3, CenterNet, Faster R-CNN | Bok choy | Weeds | RGB | 98.4%, 98.3%, 97.5% | Jin, X. et al. [101]
SVM, YOLOv3, Mask R-CNN | Lettuce crops | Weeds | Multispectral and UAV | 88%, 94%, 94% | Osorio, K. et al. [53]
ML | Wheat | Blackgrass weeds | Multispectral and UAV | 93.8% | Su, J. et al. [38]
SVM, KNN, AdaBoost and CNN | Rice | Leptochloa chinensis, sedges | RGB and UAV | 89.75%, 85.58%, 90.25%, 92.41% | Zhu, S. et al. [20]
BPNN | Soybean, sugar beet and carrot | Broad-leaf, grass, pig-weed, lambs-quarter, hares-ear mustard, turnip weed, wild carrot, Corsican mint | RGB, UAV and BONIROB robot | 96.6%, 97.7%, 93% | Abouzahir, S. et al. [102]
RF, SVM and KNN | Chilli | Unwanted weeds and parasites within crops | RGB and UAV | 94%, 96%, 63% | Islam, N. et al. [77]
Improved Faster R-CNN | Pea and strawberry | Annual goosegrass (Eleusine indica) weeds | RGB and UAV | Average of 95.3% | Khan, S. et al. [24]
RF | Bean and spinach | Thistles and young potato sprouts | RGB and DJI Phantom 3 Pro drone | 96.99% | Bah, M.D. et al. [17]
CNNLVQ | Soybean | Grassy weeds and broadleaf weeds | RGB and UAV | 99.44% | Haq, M.A. [103]
CNN | Soybean | Weeds | RGB+NIR and UAV | 99.66% | Milioto, A. et al. [104]
CNN | Chinese cabbage | Weeds | RGB and UAV | 92.41% | Ong, P. et al. [105]
ViT | Beet, parsley and spinach | Weeds | RGB and UAV | >98.63% | Reedha, R. et al. [28]
MobileNetV2 | Flax | 14 most common weeds | RGB and SAMBot | 90% | Du, Y. et al. [5]
SVM | Tobacco | Weeds | RGB and a tractor-mounted boom sprayer | 96% | Tufail, M. et al. [6]
YOLOX | Corn seedlings | Weeds | RGB | 92.45% | Zhu, H.B. et al. [16]
Faster R-CNN and YOLOv5 | Tobacco | Bare soil and weeds that grow in tobacco fields | RGB and pesticide-spraying robot | 98.43%, 94.45% | Alam, M.S. et al. [55]
Faster R-CNN with VGG | Maize seedlings | Weeds | Industrial USB cameras and field robot platform (FRP) | 98.2% | Quan, L. et al. [106]
An encoder-decoder network with atrous separable convolution | Sugar beet and oilseed | Weeds | RGB and BoniRob robot | 96.12% | Wang, A. et al. [60]
ANN with 15 units in ensemble | Carrot | Weeds | Multispectral and BoniRob | 83.5% | Lease, B.A. et al. [7]
cGAN | Sunflower, sugar beet | Weeds | Multispectral and BOSCH BoniRob farm robot | 94% | Fawakherji, M. et al. [78]
CNN | Soybeans | Weeds | RGB and precision sprayer | Spray volume reduced by up to 48.89% in the experiment | Sanchez, P.R. and Zhang, H. [107]
CNN | Grass | Broad-leaf weed | High-resolution camera and quadbike | 96.88% | Zhang, W.H. et al. [108]
Lightweight DCNN | Organic carrot | Weeds | RGB and AgBot II | 93.9% | McCool, C. et al. [109]

SVM—Support Vector Machine; ANN—Artificial Neural Network; RF—Random Forest; ML—Machine Learning; cGAN—Conditional Adversarial Nets; CNN—Convolutional Neural Network; U-Net—Convolutional Networks for Biomedical Image Segmentation; Faster R-CNN—Faster Region Convolutional Neural Network; YOLO—You Only Look Once; ViT—Vision Transformer; CNNLVQ—Convolutional Neural Network with Learning Vector Quantization; MobileNetV2—MobileNet Version 2; BPNN—Backpropagation Neural Network; Mask R-CNN—Mask Region-based Convolutional Neural Network; VGG—Visual Geometry Group; CenterNet—Objects as Points: CenterNet; KNN—K-Nearest Neighbors; AdaBoost—Adaptive Boosting; PlantNet—PlantNet Plant Identification; AlexNet—ImageNet Classification with Deep Convolutional Neural Networks; GoogleNet—Inception-v1; Xception—Extreme Inception; Swin-DeepLab—Hierarchical Vision Transformer for Semantic Segmentation.

