Article

Harnessing Digital Twins for Agriculture 5.0: A Comparative Analysis of 3D Point Cloud Tools

by Paula Catala-Roman 1,†, Enrique A. Navarro 2,†, Jaume Segura-Garcia 1,† and Miguel Garcia-Pineda 1,*,†

1 Department of Computer Science, ETSE-UV, Universitat de València, Av. de la Universitat, s/n, 46100 Burjassot, Spain
2 IRTIC Institute, Universitat de València, Av. de la Universitat, s/n, 46100 Burjassot, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2024, 14(5), 1709; https://doi.org/10.3390/app14051709
Submission received: 31 December 2023 / Revised: 6 February 2024 / Accepted: 16 February 2024 / Published: 20 February 2024
(This article belongs to the Special Issue Recent Advances in Precision Farming and Digital Agriculture)

Abstract
Digital twins are essential in Agriculture 5.0, providing an accurate digital representation of agricultural objects and processes and enabling data-driven decision-making, the simulation of future scenarios, and innovation toward a more efficient and sustainable agriculture. The main objective of this article is to review and compare the main tools for the development of digital twins for Agriculture 5.0 applications using 3D point cloud models created with photogrammetry techniques. For this purpose, the most commonly used tools for the development of these 3D models are presented. As a methodological approach, a qualitative comparison of the main characteristics of these tools was carried out. Then, based on images taken in an orange grove, a quality analysis of the 3D point cloud models obtained by each of the analyzed tools was performed. We also derived a synthetic quality index to categorize the different pieces of software. Finally, we compared the performance of the different software tools and the point clouds obtained, considering objective metrics (from the 3D quality assessment) and qualitative metrics in the synthetic quality index. With this index, we found that OpenDroneMap was the best software in terms of quality-cost ratio. The paper also introduces the concept of Agriculture 6.0, exploring the integration of advancements from Agriculture 5.0 to envision the potential evolution of agricultural practices and technologies, considering their impact on social and economic aspects.

1. Introduction

Agriculture 4.0 is the integration of digital technologies into agricultural practices, aiming to enhance productivity, efficiency, sustainability, and decision-making [1]. It encompasses the use of sensors, drones, satellite imagery, data analytics, robotics, and other intelligent systems to gather and analyze real-time data about crops, soil conditions, and farm operations. This data-driven approach enables farmers to make informed decisions on irrigation, fertilization, pest control, and harvesting, optimizing resource use and reducing environmental impact.
The main benefits of Agriculture 4.0 are:
  • Enhanced productivity: data-driven decision-making optimizes inputs like water, fertilizers, and pesticides, leading to higher yields and better resource utilization.
  • Improved efficiency: automation and precision farming techniques reduce labor requirements and minimize waste, streamlining operations.
  • Sustainability: data-driven precision agriculture helps conserve water, reduce pesticide use, and minimize soil erosion, promoting sustainable practices.
  • Reduced risks: real-time monitoring and early detection of pests, diseases, and weather conditions enable farmers to take proactive action, minimizing losses.
However, Agriculture 4.0 has several associated costs, such as:
  • Initial investment: implementing digital technologies requires upfront investments in hardware, software, and expertise, which can be significant for smaller farms.
  • Skills gap: upskilling farmers and agricultural workers to handle these technologies is essential, and this may require training programs and resources.
  • Data management: organizing, analyzing, and interpreting the vast quantities of data generated by digital tools can be challenging and require specialized expertise.
  • Technology compatibility: ensuring a seamless integration of different technologies and data platforms can be complex and require careful planning.
Agriculture 5.0 builds on advances in precision agriculture (Agriculture 4.0) and takes digitization and automation to the next level. It is a futuristic and advanced vision of agriculture that seeks to integrate cutting-edge technologies and intelligent systems to drive efficiency, sustainability, and productivity in the agricultural sector [2].
Agriculture 5.0 uses technologies such as artificial intelligence (AI), machine learning, robotics, drones, the Internet of things (IoT), and sensor systems to collect and analyze data in real time [3]. These data are used to optimize farming operations, improve decision-making, and maximize crop yields. One of the key aspects of Agriculture 5.0 is the connectivity and interconnection of different agricultural devices and systems. Farmers can remotely monitor and control their crops, animals, and equipment through digital platforms. This allows for a more efficient management of resources, such as water and fertilizers, and helps prevent diseases and pests by detecting them early.
AI plays a vital role in Agriculture 5.0 by enabling machines and systems to learn, adapt, and make autonomous decisions [4]. Machine Learning algorithms help in analyzing vast quantities of data collected from various sources, such as weather patterns, soil conditions, crop health, and market trends. This information empowers farmers to make data-driven decisions, optimize resource utilization, and maximize yields.
Big Data analytics in Agriculture 5.0 helps in processing and analyzing large volumes of complex data to gain valuable insights. By combining data from multiple sources, such as satellite imagery, sensors, drones, and historical records, farmers can monitor crop growth, detect diseases and pests, and predict yield results [5]. This knowledge allows for precise interventions, such as targeted irrigation, optimized fertilization, and timely pest control, minimizing waste and reducing environmental impact.
The IoT enables the connectivity of devices and sensors in the agricultural ecosystem. Smart sensors embedded in the field, livestock, and machinery collect real-time data on soil moisture, temperature, humidity, animal behavior, and equipment performance. These data are transmitted over 5G networks, providing high-speed, low-latency communication crucial for real-time decision-making. Farmers can remotely monitor and control operations, receive alerts, and automate tasks, resulting in improved operational efficiency and reduced labor costs [6].
Furthermore, nowadays, digital twins (or DTs) are being widely used in various fields of our lives, such as industry, healthcare, transportation, logistics, etc. [7]. A DT is an accurate digital representation or replica of a physical object, process, or service. In Agriculture 5.0, a DT is important because it allows the optimization of the planning, monitoring, and control of the farm in real time, making data-driven decisions, simulating future scenarios, and fostering innovation [8]. It provides farmers with accurate information, helps them anticipate problems, maximize yields, and develop more efficient and sustainable solutions. In short, a DT is the key to driving efficiency and productivity in Agriculture 5.0.
In Nasirahmadi et al. [9], the authors highlight the significance of DTs in agriculture as a virtual representation of farms with the potential to enhance productivity, efficiency, and energy conservation. They offer an overview of the current state of the art in DT concepts and technologies within agricultural contexts. The review outlines a comprehensive framework for DTs encompassing soil management, irrigation, robotics, farm machinery, and post-harvest food processing. It delves into various aspects such as data recording (whose analysis allows the prediction of crop performance [10]), modeling (including AI and big data), simulation, analysis, prediction, and communication (e.g., IoT and wireless technologies) within the agricultural DT framework. The paper concludes by emphasizing the role of DT systems in supporting farmers through continuous real-time monitoring of the physical and virtual farm environments, ushering in the next generation of digitalization in agriculture. The paper by Cesco et al. [11] establishes a threefold agenda, proposing a farm-tailored framework for smart agriculture and DTs, exemplifying its implementation through a nitrogen fertilization case study, and outlining challenges and future potentials. The framework, structured around data collection, processing, analysis, and application, employs an infological approach. The case study demonstrates the framework’s utility in optimizing nitrogen fertilization by addressing spatial and temporal variations in land, soil, and crop factors. The study emphasizes the role of DTs in predictive analyses, ultimately benefiting agricultural sustainability and underscoring their potential for small-farm regions.
In refs. [8,12], the authors offer literature reviews showing that the concept of the DT was first applied to the field of agriculture in 2017. These papers present several examples, such as a prototype of a DT of a greenhouse that uses information collected by a robot, for example on soil nutrients and humidity. Another example of a DT involves simulations of a garden to determine what a robot should do to keep the crop in the ideal conditions to grow [13]. A DT of a tomato crop is also presented in [14], whose developers set up a 3D model of the crop in order to add real-time information from sensors, again with the aim of making the right decisions to ensure that the real crop grows in the best conditions. In ref. [15], a prototype of a DT of a smart farm is shown, where the authors test their system with an example plant. As future work, the authors indicate that such a system will be applied to larger projects, using AI, to enable sustainable development and improve food security (and traceability). Also, in [16], the application of AI to point clouds in agriculture helps to control cropping in vineyards.
The paper [17] introduces a novel decision support system tailored to urban farming production, emphasizing distinctions from traditional agribusiness requirements. The focus is on urban agriculture, particularly aquaponics, a method integrating plant and fish cultivation in a water-efficient cycle. The study employs a cyber-physical implementation of aquaponics, enhanced by a digital twin system and machine learning for adaptive capabilities. Empirical results from a three-month trial showcase the effectiveness of data-driven decision analytics and a digital twin model in planning aquaponic system production. Additionally, the article proposes a modeling framework for large-scale urban agriculture ecosystems, forming the foundation for a decision support system that coordinates activities among independent producers to achieve collective goals. Another work related to aquaponics is [18], where the authors propose and deploy a whole efficient system for the feeding cycle in fish farms and the generation of plant nutrients. This system has proven to provide good aquaponics life-cycle monitoring metrics with different communication technologies and has also allowed a reduction in energy consumption to make the system more sustainable.
Another work based on smart farming [19] is about the design and implementation of a farmer’s digital twin, in which the authors have wearable sensors that “are used to acquire real-time data on bone rotation from a farmer’s body”, to control the farmer’s activity.
AgriLoRa [20] is a digital twin project for smart agriculture, with the objective of increasing crop yields while addressing the problem of high maintenance and hardware costs, deploying wireless sensors in the field to detect plant diseases.
Many of the works dealing with this DT idea are conceptual, such as the one developed in [21], where the idea of a service/product for agriculture is shown, in which artificial intelligence and a DT are used to make decisions for the correct management of a greenhouse. The DT has the function of controlling the environmental conditions of the greenhouse, such as humidity and CO2 level, as well as the possibility to control the windows and fans of the greenhouse itself.
Nowadays, drones have become an invaluable source of data for activities such as inspection, surveillance, mapping, and 3D modeling [22]. Through a conventional photogrammetric process, it is possible to obtain three-dimensional results, such as digital surface models or digital terrain models (DSM/DTM), elevation contours, 3D textured models, and vector data, among others, in a highly automated manner. These data can be used in a wide variety of situations and scenarios.
The use of photogrammetry allows the development of 3D point cloud maps. Through this technique, in [23], an unsupervised method of vineyard detection and feature extraction from 3D point cloud maps is presented. The proposed method allows the automatic generation of maps of the regions of land covered by vineyards and, in addition, provides information on the local orientation of the vine rows and the spacing between rows, spatially organized in maps. Another example is [24], where an alternative approach is presented to estimate the correct time of corn harvest; the proposed method focuses on the relationship between the maturity data obtained by photogrammetry and the parameters produced by the chemical analysis of maize. Another work where a photogrammetric approach is developed is [25]. In that paper, the authors develop an automatic system to measure the roughness of the ground surface from images taken on the ground with a simple digital camera, without geometric restrictions. The accuracy of the system was determined on artificial models built with polystyrene, whose position and elevation accuracies were approximately 1.5 mm, while the error in surface estimation was less than 0.76% of the site surface. These results show that two roughness indices, the surface tortuosity index and the mean value of the height, are the most effective for discriminating the levels of tillage of the agricultural soil. In [26], the authors also use photogrammetry, but for monitoring fruits at different growth stages. In this case, they automatically estimate the size of an “in-field” apple, generating a point cloud “using structure-from-motion (SfM) and multi-view stereo (MVS)” and estimating the size of the fruit. The downside of this work is the long processing time.
Most of the work found is conceptual, although some prototypes have been tested. Throughout the search, we observed that none of the articles discusses in detail the tools/technologies they intend to use when developing the DT from photogrammetric techniques. Given the scarcity of this type of study, we decided to write an article comparing several of these tools for the creation of DTs using 3D point clouds obtained from photogrammetric techniques and oriented toward digital agriculture.
Here, the ideas of the generation and use of DTs help us to elaborate the 3D models with photogrammetric techniques. These models serve as a framework to enable the collection of agricultural information with the IoT and detect problems with AI techniques.
The goal of this paper is twofold: firstly, to evaluate the efficacy of point clouds in the creation of digital twins for agricultural applications, utilizing photogrammetric techniques with both RGB and multispectral images. Secondly, to conduct a thorough review and comparative analysis of tools dedicated to developing DTs for Agriculture 5.0 applications, leveraging 3D point clouds. Furthermore, the paper explores a forward-looking concept, Agriculture 6.0, aiming to integrate the advancements from Agriculture 5.0. This new paradigm seeks to harness the collective innovations in the field, contemplating how they can synergistically enhance various social and economic aspects. The discussion on Agriculture 6.0 serves as a forward-thinking exploration into the potential evolution of agricultural practices and technologies.
To accomplish this aim, after introducing the concepts of Agriculture 5.0 and DT, along with articulating the core objective of this article, the subsequent content is systematically organized as follows. In Section 2, an outline of the potential advantages inherent in utilizing this technology within the context of Agriculture 5.0 is presented to describe the methodological approach. Also, an exposition of contemporary tools applicable to DT development is provided to describe the materials, elucidating their respective contributions to 3D modeling. Subsequently, these tools are subjected to testing and a comparative analysis in Section 3, culminating in the selection of the optimal candidate through the determination of a synthetic quality metric. Lastly, Section 4 engages in a discourse on the conclusions drawn from the study and delineates potential pathways for prospective research endeavors, including a new concept called Agriculture 6.0, which is introduced in order to take a step beyond Agriculture 5.0 as we know it today.

2. Materials and Methods

To present the methodology used in this study, photogrammetric techniques and the tools considered are described in the following subsections.

2.1. Photogrammetry to Generate Digital Twins in Agriculture 5.0

Photogrammetry is a technique that is used to measure and create three-dimensional models of objects and environments by analyzing images [27]. It consists of the process of capturing and analyzing photographs from different angles and using the visual and geometric information contained in those images to reconstruct the shape, position, and scale of objects in three-dimensional space. Photogrammetry is based on the principle of triangulation, which involves finding common points in different images and calculating their positions in 3D space using the geometry and parallax of images captured from different perspectives. These common points are called control points and can be identifiable visual points on images or artificial markers placed on objects.
The photogrammetry process generally involves several stages, which may include the following (a minimal code sketch of the core two-view steps is given after the list):
  • Image acquisition: photographs of the object or area of interest are captured from different angles and positions. It is important to have good image coverage to obtain an accurate reconstruction.
  • Image orientation: The position and relative orientation of each image are determined in relation to the others. This is accomplished by identifying common control points in the images and using matching and adjustment techniques to estimate orientation parameters.
  • Extraction of characteristic points: Key points and distinctive features are identified in the images. These points are used to track and establish correspondences between different images.
  • Matching and triangulation: correspondences are established between the characteristic points in the different images, and triangulation is performed to calculate the 3D positions of the points in space.
  • Three-dimensional model generation: A three-dimensional model is created from the calculated 3D points. Depending on the required accuracy and the desired level of detail, polygonal mesh models, dense point clouds, or surface models can be generated.
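To make these stages concrete, the following minimal sketch illustrates the two-view core of the pipeline (feature extraction, matching, relative orientation, and triangulation) with OpenCV and NumPy. It assumes a known camera intrinsic matrix K and two overlapping images, and it is only an illustration of the principle: the production tools compared later add multi-view bundle adjustment, dense matching, meshing, and texturing.

```python
# Minimal two-view structure-from-motion sketch (illustrative only; assumes a
# known camera intrinsic matrix K and two overlapping images of the scene).
import cv2
import numpy as np

def two_view_point_cloud(img1_path, img2_path, K):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # 1) Extraction of characteristic points (SIFT keypoints and descriptors).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # 2) Matching: nearest-neighbour matching filtered with Lowe's ratio test.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 3) Image orientation: essential matrix and relative pose (R, t).
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # 4) Matching and triangulation: projection matrices and 3D point computation.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean coordinates

    # 5) The resulting sparse cloud would normally be densified and meshed.
    return pts3d
```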
In the context of agriculture, photogrammetry is used to generate DTs, which are accurate virtual representations of crops, land, and agricultural structures [28,29]. These DTs provide detailed, real-time information on the status of crops and the agricultural environment, helping farmers to make decisions and optimize their farming practices [30]. The use of photogrammetry and DTs in agriculture provides a number of benefits, such as:
  • Crop monitoring: Generating DT from images allows farmers to obtain up-to-date information on crop growth and condition. They can detect areas with water stress, diseases, or pests, and take preventive or corrective measures in a timely manner.
  • Land planning and management: A DT makes it easier to plan plantings, design irrigation systems, and analyze soil characteristics. Farmers can evaluate topography, slope, sun exposure, and other factors to make informed decisions about land preparation and crop distribution.
  • Optimization of agricultural inputs: By generating DTs, farmers can accurately monitor crop health and apply agricultural inputs (such as fertilizers or pesticides) in a localized and targeted manner. This reduces input wastage and minimizes environmental impact.
  • Yield analysis and crop forecasting: A DT makes it possible to keep track of crop growth and yield over time. Farmers can assess plant health, estimate expected yields, and plan to harvest more efficiently.
Finally, DT technology serves as a powerful innovation driver within the agricultural sector. It empowers farmers to explore, test, and refine new technologies, practices, and business models within a virtual environment before implementing them in the real world. This approach fosters a culture of experimentation, ultimately leading to the development of more efficient, sustainable, and profitable solutions for the agricultural industry. A DT, in this context, emerges as a catalyst for progress and positive change.

2.2. Tools for the Development of Digital Twins in Agriculture 5.0

In this section, we conducted an extensive survey of prevalent tools employed for 3D model development through photogrammetry. These tools serve as the foundational step for subsequent applications in DT technology within the field of agriculture. We considered a diverse set of both licensed and open-source software options, as illustrated in Figure 1. This comprehensive approach allowed us to evaluate a broad spectrum of tools, catering to various needs and preferences.
  • Agisoft Metashape (Agisoft Metashape, available at: https://www.agisoft.com (accessed on 17 February 2024)) allows the generation of high-quality 3D models from photographs using advanced structure-from-motion and dense point-matching algorithms. With this tool, you can align images, generate dense point clouds, create polygonal meshes, and apply realistic textures. Agisoft Metashape is used in various industries, such as architecture, archaeology, and agriculture, to obtain accurate and visually appealing models (see Figure 2a).
  • Pix4d Mapper (Pix4d Mapper, available at: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software/ (accessed on 17 February 2024)) offers a specialized photogrammetry software package. This tool allows us to generate 3D models and maps from images taken by drone cameras, and its main applications include precision agriculture and topography, as it allows us to generate vegetation index maps to assist in agricultural management and detailed 3D terrain models (see Figure 2b). Pix4d can be used to level and smooth digital surfaces, as well as to automatically classify point clouds.
  • OpenDroneMap (OpenDroneMap, available at: https://www.opendronemap.org (accessed on 17 February 2024)) is an open-source software package for processing drone imagery and generating 3D models, maps, and orthophotos (see Figure 2c). Among the different applications of ODM (OpenDroneMap) is precision agriculture, since, in addition to the creation of dense and high-resolution point clouds of crops, it also offers an R package that allows one to obtain information on how these crops are doing (vegetation indices, detection of poor plant condition, etc.).
  • DJI Terra (DJI Terra, available at: https://enterprise.dji.com/es/dji-terra (accessed on 17 February 2024)) is a DJI-developed platform that allows the construction of 3D models from photogrammetry and drone images, allowing the transformation of physical locations into digital ones (see Figure 2d). It also offers detailed mission planning for automatic flights, linking it with the 3D modeling function. Applications include mapping and surveying, precision agriculture, and disaster management/emergency response.
  • Colmap (Colmap, available at: https://demuc.de/colmap/ (accessed on 17 February 2024)) is an open-source tool for 3D model reconstruction from ordered or unordered images. It has several applications, such as in photogrammetry or virtual reality. It is a general-purpose, end-to-end image-based 3D reconstruction pipeline with graphical and command-line interfaces.
  • Meshroom (Meshroom, available at: https://alicevision.org/#meshroom (accessed on 17 February 2024)) is an open-source software package based on the AliceVision 3D reconstruction framework. It offers photogrammetry options, allowing the construction of detailed point cloud-based models from images or videos taken from different angles (see Figure 2e). It has applications in architecture, design, and video games, among others.
  • Insight3d (Insight3d, available at: https://insight3d.sourceforge.net (accessed on 17 February 2024)) is an open-source software package that performs 3D modeling from photographs taken from different angles, based on the points it finds in common between these images. This tool does not allow processing more than a certain number of photos, so it may not be as useful for large projects.
  • Micmac (Micmac, available at: https://micmac.ensg.eu/index.php/Accueil (accessed on 17 February 2024)) is an open-source tool used for 3D reconstruction, orthophotos, and depth maps in different fields such as cartography, industry, forestry, or archaeology, among others. It is developed at the IGN (French National Geographic Institute) and the ENSG (French National School of Geographic Sciences).
  • Regard3D (Regard3D, available at: https://www.regard3d.org (accessed on 17 February 2024)) is a tool capable of transforming photos of an object taken from different angles into a 3D model using photogrammetry techniques, generating a three-dimensional representation of the object (see Figure 2f). It allows camera calibration, triangulation, and the creation and texturing of 3D models. It is open-source software and is often used in fields such as cartography, archaeology, and virtual reality.
  • RealityCapture (RealityCapture, available at: https://www.capturingreality.com (accessed on 17 February 2024)) is a photogrammetry software package for creating virtual reality scenes, textured 3D meshes, orthographic projections, and georeferenced maps from images and/or laser scans (see Figure 2g). Like the others, it is used in various fields: in cartography for the creation of maps, in video games, in virtual reality, archaeology, and geology. On its official website, you can see several examples of interesting projects, such as scans of entire human bodies or historical buildings.
  • 3DF Zephyr (3DF Zephyr, available at: https://www.3dflow.net/3df-zephyr-photogrammetry-software/ (accessed on 17 February 2024)) is the photogrammetry software solution from 3Dflow. It is specialized in the automatic reconstruction of 3D models from photographs and scan data (see Figure 2h).
Of the 11 tools presented, only 8 successfully produced 3D models from our image dataset (124 images with a resolution of 5280 × 3956 pixels). The remaining three tools had the following limitations:
  • Insight3d: this software, although functional for generating models from a small number of photos, is not suitable for processing large datasets.
  • Micmac: we encountered errors when trying to process our dataset with Micmac, mainly due to it being a very old tool.
  • Colmap: Colmap created very sparse point clouds and did not provide meaningful visual representations.
These findings highlight the importance of tool selection based on the specific requirements and scale of a project, as not all tools are equally suitable for all applications.

3. Results and Discussion: Qualitative and Quantitative Tool Comparison

In this section, we perform a comparative analysis of the tools introduced in the previous section. We begin with a qualitative assessment of the eight selected tools. Subsequently, we delve into an objective evaluation of the quality of 3D models derived from the photogrammetry techniques integrated into each of these software applications.

3.1. Qualitative Analysis

Table 1 shows the qualitative analysis carried out. The first element analyzed was the hardware requirements. In this regard, DJI Terra required the most RAM (32 GB), followed by Agisoft Metashape. Other software applications such as Pix4dMapper, Meshroom, and 3DF Zephyr recommended 32 GB of RAM for proper operation, although they could work with 16 GB.
The rest of the tools could work properly with between 8 and 16 GB of RAM. On an interesting note, ODM was the tool requiring the least RAM, since it could work with 4 GB. The next aspect analyzed was the type of license required. In this case, we had four open-source licensed tools, namely ODM, Meshroom, Regard3D, and 3DF Zephyr, while the other four tools had a paid license with various types of plans, some of them costing more than EUR 3000.
Another interesting aspect is the availability of an API; this makes it easier to develop new functionalities. In this case, the tools that had this feature were Agisoft Metashape, Pix4dMapper, ODM, and RealityCapture. All the tools analyzed except Regard3D and RealityCapture had the ability to process multispectral images, a very interesting aspect in the field of digital agriculture, since many indicators (normalized reflectance index or NRI, normalized difference vegetation index or NDVI, etc.) are obtained from these types of images.
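As an illustration of why multispectral support matters, the NDVI mentioned above is computed as (NIR − Red)/(NIR + Red). The short sketch below shows one way to derive an NDVI map from exported red and near-infrared bands; the file names and the use of the rasterio library are assumptions made for this example, not outputs of any specific tool discussed here.

```python
# Minimal NDVI computation sketch, assuming the multispectral bands have already
# been exported as aligned single-band GeoTIFFs (file names are illustrative).
import numpy as np
import rasterio  # common library for reading GeoTIFF bands

with rasterio.open("orthomosaic_red.tif") as red_src, \
     rasterio.open("orthomosaic_nir.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)

print("mean NDVI over the plot:", float(ndvi.mean()))
```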
The georeferencing of the models, generated by each of the analyzed applications, is a very important feature when we are talking about digital agriculture. This quality allows us to locate the digital twins on a map in an unequivocal way. All the applications, except Meshroom and Regard3D, allowed this feature. Linked to this, we have elevation models. Elevation models are a digital representation of the land surface. These can help farmers understand the topography of their land, which can be useful for planning water management, fertilizer application, and other aspects of farming. Meshroom and Regard3D were the only ones that did not have this feature. In addition, we included in this analysis the option of distance measurements on the generated 3D model. This aspect was supported by most software applications, except for Meshroom and Regard3D.
Finally, the last feature analyzed was the possibility of distributed processing; in this case, ODM and Agisoft Metashape allowed this option. This can be an important aspect if you want to speed up processes and generate digital twins automatically after each flight.
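As a sketch of what such an automated pipeline could look like with ODM, the snippet below submits a flight's images to a NodeODM processing node through the pyodm client and downloads the generated assets. The node address, dataset path, and processing options are illustrative assumptions, not the configuration used in this study.

```python
# Sketch of automating point cloud generation after each flight with OpenDroneMap,
# using the pyodm client against a running NodeODM node (host/port, paths, and
# options below are illustrative assumptions, not values used in this study).
import glob
from pyodm import Node

node = Node("localhost", 3000)                 # NodeODM instance (e.g., via Docker)
images = glob.glob("flights/2023-05-puig/*.JPG")

# Submit the dataset; ODM exposes processing options such as DSM/DTM generation.
task = node.create_task(images, {"dsm": True, "dtm": True, "pc-quality": "high"})
task.wait_for_completion()                     # block until photogrammetry finishes

# Download the generated assets (orthophoto, point cloud, elevation models, ...).
task.download_assets("./results/2023-05-puig")
```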

3.2. Quantitative Analysis

To perform the quantitative analysis of the tools discussed throughout the article, we created a 3D model with a point cloud obtained from the images captured by the DJI Mavic 3M (DJI Mavic 3M, more information: https://ag.dji.com/es/mavic-3-m (accessed on 17 February 2024) drone in some fields of El Puig, Valencia. To generate this point cloud, a dataset with 142 images was obtained with a series of flights. Figure 3 shows an orthophoto of the flight.
The first parameter analyzed was the processing time of photogrammetry to create a 3D point cloud model from the 142 images. These times depend on the characteristics of the machine on which the tools are installed. In our case, all of them were installed on the same machine, whose features were a 12th Gen Intel(R) Core(TM) i7-12700F CPU @ 2.10 GHz with a 25 MB cache, 2 × 16 GB DIMM DDR4 3200 RAM, a WD Blue SN570 SSD 1 TB M.2 NVMe drive, and a Windows 11 64-bit operating system.
In terms of the time used by each tool to generate the point cloud, the fastest were DJI Terra, RealityCapture, and ODM, at 14, 20, and 25 min, respectively. Agisoft Metashape, Pix4dMapper, and 3DF Zephyr took about 40 min. Meshroom and Regard3D took more than an hour. Table 2 shows the number of points in the point clouds obtained by the different software tools studied in this paper.
To perform the quality study, each model generated by each analyzed tool was saved in the “.ply” format. These 3D point cloud models were evaluated using the NR-3DQA [31] tool available on Github (Accessed on 10 November 2023: https://github.com/zzc-1998/NR-3DQA/tree/main).
Following the guidelines of [31], we extracted the following geometric features from the tool's set of metrics for the objective assessment of point cloud quality (a computational sketch of these features follows the list):
  • Curvature index (ranges from zero to one): This measurement assesses the degree to which a curve differs from a straight line, commonly indicating roughness or smoothness. Greater values indicate improved quality for the same model.
  • Anisotropy index (ranges from zero to one): It is the measurement of variations in geometric properties in different directions. Lower values indicate higher quality for the same model.
  • Linearity index (ranges from zero to one): It measures the level of similarity to a straight line. In this context, higher values indicate higher quality for the same model.
  • Flatness index (ranges from zero to one): This parameter measures how closely a surface approximates a flat plane. A low value indicates higher quality for a given model.
  • Sphericity index (ranges from zero to one): It measures how closely an object’s shape resembles that of a perfect sphere. Higher values indicate greater quality for the same model.
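These descriptors are commonly derived from the eigenvalues of the covariance matrix of each point's local neighborhood (with λ1 ≥ λ2 ≥ λ3). The sketch below shows one standard formulation of these per-point features and their mean/standard deviation over a cloud; it follows the usual definitions rather than reproducing the exact NR-3DQA implementation, whose neighborhood sizes and normalizations may differ.

```python
# Sketch of eigenvalue-based geometric features for a point cloud (one common
# formulation; the NR-3DQA implementation may differ in details such as
# neighbourhood size or normalization).
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, k=20):
    """points: (N, 3) array; returns per-feature (mean, std) over all points."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # k nearest neighbours per point
    feats = {"curvature": [], "anisotropy": [], "linearity": [],
             "planarity": [], "sphericity": []}
    for nbrs in idx:
        cov = np.cov(points[nbrs].T)            # 3x3 covariance of the neighbourhood
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
        l1, l2, l3 = np.maximum(lam, 1e-12)      # avoid division by zero
        feats["curvature"].append(l3 / (l1 + l2 + l3))
        feats["anisotropy"].append((l1 - l3) / l1)
        feats["linearity"].append((l1 - l2) / l1)
        feats["planarity"].append((l2 - l3) / l1)
        feats["sphericity"].append(l3 / l1)
    return {name: (float(np.mean(v)), float(np.std(v))) for name, v in feats.items()}
```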
Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 show the mean values and the standard deviation of the selected metrics, i.e., curvature, anisotropy, linearity, flatness, and sphericity, for each 3D model obtained with the tools analyzed in this work.
When examining the curvature metric (see Figure 4), our evaluation indicated that Agisoft stood out as the top-performing software with a mean value of 0.11, with DJI Terra (0.059) and ODM (0.051) closely following. The remaining analyzed software tools demonstrated quite similar performances in this regard with values near or smaller than 0.04.
A similar behavior occurred when we analyzed the anisotropy in Figure 5. Agisoft, with a value of 0.78, and DJI Terra and ODM (with values of 0.89 and 0.90, respectively) had lower values than the other tools analyzed, and their standard deviations were also lower. The rest of the tools had similar values, except Meshroom, which had a higher standard deviation.
Figure 6 shows the mean linearity value and standard deviation for each tool evaluated. In this case, all the tools obtained similar values close to 0.4, except for RealityCapture and Agisoft Metashape, which had mean values of 0.54 and 0.47, respectively, and were therefore the best ones in this metric, while 3DF Zephyr, which obtained a mean value of 0.45, was the worst.
In the case of the flatness metric analysis presented in Figure 7, we can observe that the software with the best performance was Agisoft (0.31), followed by RealityCapture (0.38), and DJI Terra (0.42); the rest of the analyzed tools had similar performance, obtaining a flatness value greater than 0.5. When analyzing the deviation, we see that a similar range was obtained by all the tools.
Finally, if we analyze sphericity (see Figure 8), we see that the behavior observed for the previously analyzed variables is repeated. Agisoft was the software with the best score (0.21) in this metric, followed by DJI Terra (0.20) and ODM (0.095). After that, the rest of the tools had values below 0.07.
In Figure 2, we can see different captures of the 3D models generated by each tool from images captured by the DJI Mavic 3M drone at the selected location (El Puig, Valencia). Figure 9 shows a more enlarged view, where the details and differences between each software tool can be better perceived.

3.3. Design of a Synthetic Quality Metric for Software Selection

In order to establish a unique metric for software evaluation, a synthetic metric was designed to qualify and model the quality of the software.
Taking into account the mean values and standard deviations of the metrics previously evaluated (shown in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8), we computed the coefficient of variation (CV) of each metric as its standard deviation divided by its mean value, $\mathrm{CV}_i = \Delta \mathrm{metric}_i / \overline{\mathrm{metric}_i}$, and designed the synthetic metric as a function of these CVs. This is shown in Equation (1):
$$\frac{\Delta x}{x} = f\!\left(\frac{\Delta Sph}{\overline{Sph}},\ \frac{\Delta Pla}{\overline{Pla}},\ \frac{\Delta Lin}{\overline{Lin}},\ \frac{\Delta Ani}{\overline{Ani}},\ \frac{\Delta Cur}{\overline{Cur}}\right) \qquad (1)$$
where $Sph$ stands for sphericity, $Pla$ for planarity (or flatness), $Lin$ for linearity, $Ani$ for anisotropy, and $Cur$ for curvature.
The results of the relative values for each metric are in Table 3. Figure 10 shows the correlation among the CVs of the point cloud quality metrics. Here, we can see that anisotropy is one of the outlier metrics in this set.
We propose the synthetic metric as a multilinear composite variable of the form given in Equation (1). In a first approach, we used the summation of all of these CV values. Furthermore, drawing on Table 1, we applied weighting (penalty) factors of 85% when the software did not provide an API and 85% when it was not free software. After applying this procedure to the coefficients of variation, we fitted a multilinear model with the synthetic quality (SynthQ) metric as the dependent variable. In RStudio, we used the linear modeling (lm) function to compute this multilinear model. The resulting model is shown in Equation (2). Furthermore, Table 4 shows a summary of the whole multilinear regression study.
$$\mathrm{SynthQ} = 1.243 + 1.450 \cdot Sph - 0.241 \cdot Pla - 0.144 \cdot Cur \qquad (2)$$
As indicated in the explanation of the features, we can see that the Agisoft Metashape tool was the one that provided the best quality (as it also provided a high number of points in Table 2), followed by DJI Terra, while Pix4D Mapper obtained the worst result in that test. In this case, all these options require a license fee, and here we applied the corresponding weighting penalty. With our model, OpenDroneMap (ODM) appeared as the best option in terms of the cost-benefit relationship, as shown in Table 5.
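As a worked illustration of how the synthetic metric is applied, the sketch below recomputes the Table 5 scores from the CVs of Table 3, the regression coefficients of Table 4, and the 85% weighting factors for tools that are not free software or lack an API (flags taken from Table 1); applying these factors multiplicatively reproduces the tabulated values to rounding. The helper function is a minimal sketch written for this text, not code released with the study.

```python
# Sketch reproducing the synthetic quality metric (Table 5) from the CVs of
# Table 3, the regression coefficients of Table 4, and the 85% penalty factors
# applied when a tool is not free software or lacks an API (flags from Table 1).
# Illustration only, not code released with the study.

COEF = {"intercept": 1.2434, "Sph": 1.4504, "Pla": -0.2409, "Cur": -0.1439}

# Per-tool CVs (Sph, Pla, Cur) from Table 3 and qualitative flags from Table 1.
TOOLS = {
    #  name:                 (Sph,    Pla,    Cur,    free,  has_api)
    "Agisoft":               (0.6219, 0.5399, 0.5087, False, True),
    "Pix4D Mapper":          (1.3013, 0.3427, 1.2058, False, True),
    "ODM":                   (1.1353, 0.3411, 1.0314, True,  True),
    "DJI Terra":             (0.5373, 0.4220, 0.9475, False, False),
    "Meshroom":              (1.0000, 0.3006, 1.0000, True,  False),
    "Regard3D":              (1.1324, 0.3070, 1.0644, True,  False),
    "RealityCapture (EDU)":  (1.0639, 0.5025, 0.9373, False, True),
    "3DF Zephyr":            (1.2219, 0.2856, 1.1512, True,  False),
}

def synth_q(sph, pla, cur, free, has_api):
    score = (COEF["intercept"] + COEF["Sph"] * sph
             + COEF["Pla"] * pla + COEF["Cur"] * cur)
    if not free:
        score *= 0.85      # penalty for a paid license
    if not has_api:
        score *= 0.85      # penalty for a missing API
    return score

for name, (sph, pla, cur, free, api) in TOOLS.items():
    print(f"{name:22s} SynthQ = {synth_q(sph, pla, cur, free, api):.4f}")
```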
After conducting a comprehensive review of related works, we identified only one relevant paper [32]. This paper initiated a comparison of tools for agronomy point cloud representation, providing a foundational reference point for our research. To extend beyond the existing state of the art, we expanded and enriched this comparative analysis. Unlike the previous work, we incorporated additional tools and introduced quality metrics for a more nuanced evaluation of point clouds.
Our dataset, acquired using a DJI Mavic 3M drone in the agricultural fields of El Puig, Valencia, Spain, offers a rich foundation for a meticulous assessment of software tools. This dataset, capturing the complexities of real-world agricultural scenarios, serves as a valuable resource for a more detailed and robust evaluation.
Expanding the toolset, we introduced options like 3DF Zephyr and RealityCapture, while omitting others such as Ensomosaic and MicMac. Exclusions were guided by practical considerations, as rendering challenges were encountered, potentially attributed to limitations posed by the available number of photographs. Consequently, our evaluation encompassed 11 tools, with reliable and validated results obtained from 8 of them.
In comparison to the study in [32], our use case showcased a broader spectrum of tools, presenting image captures from all software tools compatible with our dataset. Additionally, our research introduced a meticulously calibrated quality analysis of point clouds, exceeding the scope of the prior study. This ensured a more comprehensive understanding of the capabilities and limitations of the considered software tools in the context of agricultural point cloud representation. Finally, we designed a synthetic metric to qualify and model the quality of the software, taking into account the mean values and standard deviations of the metrics analyzed in Section 3.3.

4. Conclusions

In this paper, we explored the advantages of integrating digital agricultural models, particularly through photogrammetry techniques. We conducted an extensive study to assess the benefits of utilizing photogrammetry for the creation of digital twins (DTs) in the agricultural domain. Subsequently, we performed both qualitative and quantitative analyses of the most commonly used tools for implementing these photogrammetric techniques and generating 3D models.
The quantitative assessment was conducted in terms of the evaluation of the different quality metrics, such as curvature, anisotropy, linearity, flatness, and sphericity. The correlation of the coefficients of variation for every piece of software within each quality metric allowed us to determine that a model with three of these metrics (i.e., sphericity, planarity/flatness and curvature) was optimum for the generation of a new synthetic metric. Also, the qualitative assessment of the different tools enabled the evaluation of the main characteristics that were of interest to us (i.e., free software, availability of API, georeferencing). These quantitative and qualitative metrics were used to model a synthetic quality metric which involved all the aforementioned aspects. The evaluation of the synthetic metric allowed us to categorize the different software tools with the inclusion of the three qualitative aspects, which modulated the synthetic quality model, weighting these aspects up to a certain level (here 85%) if they were not available in the software. The evaluation ranking identified OpenDroneMap as the best option considering that it is open-source, georeferenced, and has an API to process the point clouds.
Our comprehensive evaluation of the study leads us to conclude that the OpenDroneMap (ODM) tool presents the best quality-to-features ratio, and it is the software that shows the best cost-benefit relationship. Notably, it offers high-quality results, is an open-source tool with continuous improvements, and boasts faster computation times. Furthermore, the tool’s versatile features make it adaptable for a wide range of projects in the agricultural sector.
Agriculture has always been an essential part of mankind, providing food and livelihoods and contributing to socio-economic development [33]. The relationship between Agriculture 5.0 and society is multifaceted and encompasses various aspects. Beyond Agriculture 5.0, there is a strong connection between agriculture and society; this union between society and the new era of digital agriculture could be called Agriculture 6.0.
In Agriculture 6.0, the integration of DTs will improve education and knowledge exchange. DTs serve as virtual representations of agricultural systems, enabling real-time data exchange and decision-making. Using machine learning and sensor data, DTs can gather information about physical models and create accurate virtual representations. This facilitates knowledge transfer among farmers, researchers, and stakeholders, allowing them to better understand complex agricultural processes. Farmers can access and analyze real-time data from DTs, enabling them to make informed decisions and optimize their farming practices. In addition, DTs can serve as educational tools, providing interactive and immersive experiences for students and professionals to learn about sustainable farming techniques and resource management.
However, the farmers' participation in these technology-driven actions for crop improvement is not always uniform. Here, the critical point is always the cost and financial commitment of the agricultural community [2]. Farmers appreciate systems without any cost, but this is not always possible, and they are usually reluctant to invest in research and new automated technologies, as their knowledge of maintaining such systems is usually limited.
To sum up, Agriculture 6.0 can facilitate knowledge sharing and capacity building through digital platforms, enabling farmers to access information, best practices, and expert advice. This can enhance agricultural productivity, improve farming techniques, and empower farmers with the necessary skills and knowledge to adapt to changing agricultural landscapes; however, the involvement and commitment of the whole social network, and especially the agricultural community, are of major importance for the successful implementation of this type of technology.

Author Contributions

Conceptualization, M.G.-P., E.A.N. and J.S.-G.; methodology, M.G.-P. and P.C.-R.; software, M.G.-P., J.S.-G. and P.C.-R.; validation, J.S.-G., P.C.-R. and M.G.-P.; formal analysis, M.G.-P. and E.A.N.; investigation, P.C.-R.; resources, E.A.N.; data curation, P.C.-R.; writing—original draft preparation, P.C.-R. and M.G.-P.; writing—review and editing, J.S.-G. and M.G.-P.; supervision, M.G.-P. and J.S.-G.; project administration, J.S.-G. and M.G.-P.; funding acquisition, M.G.-P. and J.S.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Ministry of Science and Innovation/Spanish Research Agency (MCIN/AEI) within the project Agriculture 6.0 with reference TED2021-131040B-C33, funded by MCIN/AEI/10.13039/501100011033 and by the European Union “NextGenerationEU”/PRTR. It was also funded by the Spanish Ministry of Universities with grant PRX22/000503.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the technical staff from the IDS company for their support on the drone flights.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Araújo, S.O.; Peres, R.S.; Barata, J.; Lidon, F.; Ramalho, J.C. Characterising the Agriculture 4.0 Landscape—Emerging Trends, Challenges and Opportunities. Agronomy 2021, 11, 667. [Google Scholar] [CrossRef]
  2. Ahmad, L.; Nabi, F. Agriculture 5.0: Artificial Intelligence, IoT and Machine Learning; CRC Press: Boca Raton, FL, USA, 2021. [Google Scholar] [CrossRef]
  3. Saiz-Rubio, V.; Rovira-Más, F. From Smart Farming towards Agriculture 5.0: A Review on Crop Data Management. Agronomy 2020, 10, 207. [Google Scholar] [CrossRef]
  4. Nakaguchi, V.M.; Ahamed, T. Artificial Intelligence in Agriculture: Commitment to Establish Society 5.0: An Analytical Concepts Mapping for Deep Learning Application. In IoT and AI in Agriculture: Self-Sufficiency in Food Production to Achieve Society 5.0 and SDG’s Globally; Ahamed, T., Ed.; Springer Nature: Singapore, 2023; pp. 133–152. [Google Scholar] [CrossRef]
  5. Muangprathub, J.; Boonnam, N.; Kajornkasirat, S.; Lekbangpong, N.; Wanichsombat, A.; Nillaor, P. IoT and agriculture data analysis for smart farm. Comput. Electron. Agric. 2019, 156, 467–474. [Google Scholar] [CrossRef]
  6. Martos, V.; Ahmad, A.; Cartujo, P.; Ordoñez, J. Ensuring Agricultural Sustainability through Remote Sensing in the Era of Agriculture 5.0. Appl. Sci. 2021, 11, 5911. [Google Scholar] [CrossRef]
  7. Barricelli, B.R.; Casiraghi, E.; Fogli, D. A Survey on Digital Twin: Definitions, Characteristics, Applications, and Design Implications. IEEE Access 2019, 7, 167653–167671. [Google Scholar] [CrossRef]
  8. Pylianidis, C.; Osinga, S.; Athanasiadis, I.N. Introducing digital twins to agriculture. Comput. Electron. Agric. 2021, 184, 105942. [Google Scholar] [CrossRef]
  9. Nasirahmadi, A.; Hensel, O. Toward the Next Generation of Digitalization in Agriculture Based on Digital Twin Paradigm. Sensors 2022, 22, 498. [Google Scholar] [CrossRef] [PubMed]
  10. Fuentealba, D.; Flores, C.; Soto, I.; Zamorano, R.; Reid, S. Guidelines for Digital Twins in 5G Agriculture. In Proceedings of the 2022 13th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Porto, Portugal, 20–22 July 2022; pp. 613–618. [Google Scholar] [CrossRef]
  11. Cesco, S.; Sambo, P.; Borin, M.; Basso, B.; Orzes, G.; Mazzetto, F. Smart agriculture and digital twins: Applications and challenges in a vision of sustainability. Eur. J. Agron. 2023, 146, 126809. [Google Scholar] [CrossRef]
  12. Purcell, W.; Neubauer, T. Digital Twins in Agriculture: A State-of-the-art review. Smart Agric. Technol. 2023, 3, 100094. [Google Scholar] [CrossRef]
  13. Barnard, A. In the Digital Indoor Garden. 2019. Available online: https://www.siemens.com/global/en/company/stories/research-technologies/digitaltwin/digital-indoor-garden.html (accessed on 11 November 2023).
  14. Branthôme, F.X. Digital Twins for Tomatoes, Food and Farming. 2020. Available online: https://www.tomatonews.com/en/digital-twins-for-tomatoes-food-and-farming_2_1096.html (accessed on 11 November 2023).
  15. Alves, R.G.; Souza, G.; Maia, R.F.; Tran, A.L.H.; Kamienski, C.; Soininen, J.P.; Aquino, P.T.; Lima, F. A digital twin for smart farming. In Proceedings of the 2019 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 17–20 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  16. Biglia, A.; Zaman, S.; Gay, P.; Ricauda Aimonino, D.; Comba, L. 3D point cloud density-based segmentation for vine rows detection and localisation. Comput. Electron. Agric. 2022, 199, 107166. [Google Scholar] [CrossRef]
  17. Ghandar, A.; Ahmed, A.; Zulfiqar, S.; Hua, Z.; Hanai, M.; Theodoropoulos, G. A Decision Support System for Urban Agriculture Using Digital Twin: A Case Study with Aquaponics. IEEE Access 2021, 9, 35691–35708. [Google Scholar] [CrossRef]
  18. Alselek, M.; Alcaraz-Calero, J.; Segura-Garcia, J.; Wang, Q. Water IoT Monitoring System for Aquaponics Health and Fishery Applications. Sensors 2022, 22, 7679. [Google Scholar] [CrossRef] [PubMed]
  19. Mulyani, G.S.; Adhitya, Y.; Köppen, M. Design and Implementation of Farmer Digital Twin Control in Smart Farming. In Proceedings of the International Conference on Intelligent Networking and Collaborative Systems, Chiang Mai, Thailand, 6–8 September 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 516–527. [Google Scholar]
  20. Angin, P.; Anisi, M.H.; Göksel, F.; Gürsoy, C.; Büyükgülcü, A. AgriLoRa: A digital twin framework for smart agriculture. J. Wirel. Mob. Netw. Ubiquitous Comput. Dependable Appl. 2020, 11, 77–96. [Google Scholar] [CrossRef]
  21. Digital Twin Solutions for Smart Farming. 2019. Available online: https://www.rdworldonline.com/rd-100-2019-winner/digital-twin-solutions-for-smart-farming/ (accessed on 11 July 2023).
  22. Grenzdörffer, G.; Engel, A.; Teichert, B. The photogrammetric potential of low-cost UAVs in forestry and agriculture. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2008, 31, 1207–1214. [Google Scholar]
  23. Comba, L.; Biglia, A.; Ricauda Aimonino, D.; Gay, P. Unsupervised detection of vineyards by 3D point-cloud UAV photogrammetry for precision agriculture. Comput. Electron. Agric. 2018, 155, 84–95. [Google Scholar] [CrossRef]
  24. Janoušek, J.; Jambor, V.; Marcoň, P.; Dohnal, P.; Synková, H.; Fiala, P. Using UAV-Based Photogrammetry to Obtain Correlation between the Vegetation Indices and Chemical Analysis of Agricultural Crops. Remote. Sens. 2021, 13, 1878. [Google Scholar] [CrossRef]
  25. Gilliot, J.; Vaudour, E.; Michelin, J. Soil surface roughness measurement: A new fully automatic photogrammetric approach applied to agricultural bare fields. Comput. Electron. Agric. 2017, 134, 63–78. [Google Scholar] [CrossRef]
  26. Gené-Mola, J.; Sanz-Cortiella, R.; Rosell-Polo, J.R.; Escola, A.; Gregorio, E. In-field apple size estimation using photogrammetry-derived 3D point clouds: Comparison of 4 different methods considering fruit occlusions. Comput. Electron. Agric. 2021, 188, 106343. [Google Scholar] [CrossRef]
  27. Linder, W. Digital Photogrammetry; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar] [CrossRef]
  28. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV Photogrammetry for Mapping and 3D Modeling—Current Status and Future Perspectives. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2011, XXXVIII-1/C22, 25–31. [Google Scholar] [CrossRef]
  29. Edemetti, F.; Maiale, A.; Carlini, C.; D’Auria, O.; Llorca, J.; Maria Tulino, A. Vineyard Digital Twin: Construction and characterization via UAV images—DIWINE Proof of Concept. In Proceedings of the 2022 IEEE 23rd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Belfast, UK, 14–17 June 2022; pp. 601–606. [Google Scholar] [CrossRef]
  30. Peladarinos, N.; Piromalis, D.; Cheimaras, V.; Tserepas, E.; Munteanu, R.A.; Papageorgas, P. Enhancing Smart Agriculture by Implementing Digital Twins: A Comprehensive Review. Sensors 2023, 23, 7128. [Google Scholar] [CrossRef]
  31. Zhang, Z.; Sun, W.; Min, X.; Wang, T.; Lu, W.; Zhai, G. No-Reference Quality Assessment for 3D Colored Point Cloud and Mesh Models. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7618–7631. [Google Scholar] [CrossRef]
  32. Delgado-Vera, C.; Aguirre-Munizaga, M.; Jiménez-Icaza, M.; Manobanda-Herrera, N.; Rodríguez-Méndez, A. A photogrammetry software as a tool for precision agriculture: A case study. In Proceedings of the International Conference on Technologies and Innovation, Guayaquil, Ecuador, 24–27 October 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 282–295. [Google Scholar] [CrossRef]
  33. Rao, A.N. Food, Agriculture and Education: Science and Technology Education and Future Human Needs; Pergamon Press: Oxford, UK, 2013; Volume 6. [Google Scholar]
Figure 1. Toolkit scheme for the analysis.
Figure 2. Visual comparison of colored point clouds with different tools.
Figure 3. Orthophoto generated by ODM of the test flight carried out in May 2023 in El Puig, Valencia.
Figure 4. Mean (and standard deviation) values for the curvature metric for every piece of software.
Figure 5. Mean (and standard deviation) values for the anisotropy metric for every piece of software.
Figure 6. Mean (and standard deviation) values for the linearity metric for every piece of software.
Figure 7. Mean (and standard deviation) values for the flatness metric for every piece of software.
Figure 8. Mean (and standard deviation) values for the sphericity metric for every piece of software.
Figure 9. Visual comparison of a detailed colored point cloud with the different selected tools.
Figure 10. Correlation-graphic for all the CV values of the quality metrics.
Table 1. Qualitative analysis of the tools analyzed. N/A: not available.

Agisoft Metashape
- Hardware requirements (minimum): between 16 and 32 GB of RAM; Intel or AMD CPU with 4/8 cores; NVIDIA or AMD GPU with more than 700 cores.
- Free software: no | API: yes | Multispectral image processing: yes | Georeferencing: yes | Elevation models: yes | Distance measurements: yes | Distributed processing: yes.

Pix4dMapper
- Hardware requirements (minimum): between 4 and 16 GB of RAM. Recommended: between 16 and 32 GB of RAM; SSD hard drive with between 15 and 120 GB of free space.
- Free software: no | API: yes | Multispectral image processing: yes | Georeferencing: yes | Elevation models: yes | Distance measurements: yes | Distributed processing: N/A.

OpenDroneMap
- Hardware requirements (minimum): 64-bit CPU; 20 GB of disk space; 4 GB of RAM. Recommended: latest-generation CPU; 100 GB of disk space; 16 GB of RAM.
- Free software: yes | API: yes | Multispectral image processing: yes | Georeferencing: yes | Elevation models: yes | Distance measurements: yes | Distributed processing: yes.

DJI Terra
- Hardware requirements (minimum): NVIDIA graphics card; 32 GB of RAM; 64-bit Windows 7 or higher. Recommended: NVIDIA GeForce GTX 2070 graphics card or higher.
- Free software: no | API: N/A | Multispectral image processing: yes | Georeferencing: yes | Elevation models: yes | Distance measurements: yes | Distributed processing: N/A.

Meshroom
- Hardware requirements (minimum): Windows x64, Linux, or macOS; recent Intel or AMD CPU; 8 GB of RAM; NVIDIA CUDA-enabled GPU. Recommended: Intel Core i7 or AMD Ryzen 7 CPU; 32 GB of RAM; 20 GB+ HDD or SSD; NVIDIA GeForce GTX 1070 GPU.
- Free software: yes | API: N/A | Multispectral image processing: yes | Georeferencing: N/A | Elevation models: N/A | Distance measurements: N/A | Distributed processing: N/A.

Regard3D
- Hardware requirements: 64-bit versions of Windows (7 or newer) or Mac OS X (10.7 or newer); 8 GB of RAM or more recommended.
- Free software: yes | API: N/A | Multispectral image processing: N/A | Georeferencing: N/A | Elevation models: N/A | Distance measurements: N/A | Distributed processing: N/A.

RealityCapture
- Hardware requirements (minimum): 64-bit PC with at least 8 GB of RAM; 64-bit Microsoft Windows 7/8/8.1/10 or Windows Server 2008+; NVIDIA graphics card with CUDA 3.0+ capabilities and 1 GB of RAM.
- Free software: no | API: yes | Multispectral image processing: N/A | Georeferencing: yes | Elevation models: yes | Distance measurements: yes | Distributed processing: N/A.

3DF Zephyr
- Hardware requirements (minimum): Windows 11/10/8.1/8 (64-bit); 16 GB of RAM; 10 GB of free HDD space; dual-core 2.0 GHz or equivalent processor. Recommended: 32 GB of RAM; 20 GB of free space on an SSD drive.
- Free software: yes | API: N/A | Multispectral image processing: yes | Georeferencing: yes | Elevation models: yes | Distance measurements: yes | Distributed processing: N/A.
Table 2. Number of points obtained with each piece of software.

Software                 Number of Points
Agisoft Photoscan        81,298,807
Pix4D Mapper             19,218,130
DJI Terra                 5,737,547
OpenDroneMap             21,569,057
Meshroom                  4,599,051
Regard3D                 10,163,553
Reality Capture (EDU)       535,345
3DF Zephyr                2,077,621
Table 3. Coefficients of variation (CV) for every metric and software tool.

Software          Sph      Pla      Lin      Ani      Cur
Agisoft           0.6219   0.5399   0.3951   0.1663   0.5087
Pix4DMapper       1.3013   0.3427   0.4877   0.0968   1.2058
ODM               1.1353   0.3411   0.4440   0.1195   1.0314
DJI Terra         0.5373   0.4220   0.4314   0.1249   0.9475
Meshroom          1.0000   0.3006   0.4242   1.0000   1.0000
Regard3D          1.1324   0.3070   0.4261   0.0547   1.0644
RealityCapture    1.0639   0.5025   0.3620   0.0804   0.9373
3DF Zephyr        1.2219   0.2856   0.4333   0.0725   1.1512
Table 4. Summary of the multilinear analysis with all the CVs used to model the synthetic quality metric with 3 parameters.

              Estimate   Std. Error   t Value   Pr(>|t|)
(Intercept)    1.2434     1.8106       0.687     0.530
Sph            1.4504     0.7347       1.974     0.120
Pla           -0.2409     2.2934      -0.105     0.921
Cur           -0.1439     1.2560      -0.115     0.914
Table 5. Synthetic quality metric as a result of the application of the multilinear model to the CVs (with 3 parameters).

Software                 Multi-Linear Model
Agisoft                  1.6509
Pix4D Mapper             2.4437
ODM                      2.6596
DJI Terra                1.2895
Meshroom                 2.1059
Regard3D                 2.2601
RealityCapture (EDU)     2.1510
3DF Zephyr               2.3641
