Advances on UAV-Based Sensing and Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 12328

Special Issue Editors


Prof. Dr. Michalis Zervakis
Guest Editor
School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Crete, Greece
Interests: digital image and signal processing; biomedical applications; remote sensing applications

Dr. Marios Antonakakis
Guest Editor
Kounoupidiana Campus, DISPLAY Laboratory, School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Crete, Greece
Interests: remote sensing; imaging for object detection; deep neural networks; machine learning; UAV; surveillance systems; non-invasive diagnostic and therapeutic tools; brain connectivity; electromagnetic source imaging

Special Issue Information

Dear Colleagues,

Unmanned Aerial Vehicle (UAV)-based sensing and imaging aims to compile recent trends in UAV-mounted sensors and analytics for studying critical conditions and deriving useful interpretations of physical events on the Earth's surface. Particular focus is placed on events relating to flora and fauna, including terrain development and the motion of species, even through foliage. Contributions of interest cover the entire range of operations, from the capture of imagery by any kind of remote sensor (optical, acoustic, hyperspectral, infrared, RADAR/SAR, LIDAR, etc.) to the associated processing tasks, machine learning, and computer vision algorithms. In recent years, applications of statistical and machine learning algorithms have focused mainly on classification tasks using optical and biomedical images. The proliferation of remote sensing systems has allowed a wide range of data to be collected from any target on the Earth's surface. In this regard, and owing to recent technological breakthroughs, aerial imaging with UAVs has become a common approach to data acquisition. Surface mapping with UAV platforms offers several advantages over orbital and other aerial acquisition methods. The real challenge in remote sensing approaches is to obtain automatic, rapid, and accurate information from this type of data. In recent years, the advent of deep learning techniques has offered robust and intelligent methods for improving the mapping of the Earth's surface. Our goal is therefore to welcome new contributions on methods and applications that advance UAV-based sensing and imaging through novelty, practicality, and ease of use.

The present Special Issue aims to collect and categorize recent scientific contributions closely linked to UAV sensing and imaging, including advanced sensing technologies, multisensory systems, and state-of-the-art algorithmic developments. In recent years, we have experienced a breakthrough in sensor developments targeting real-time object detection on embedded systems (autonomous or not), multi-sensing devices collecting multiple modalities of information (e.g., optical, satellite imaging, acoustic, hyperspectral, infrared, RADAR-based, LIDAR-based, etc.), the Internet of Things, and advanced developments in digital telecommunications. This explosion is also attributable to the advancement of deep neural networks, which exhibit impressive capabilities for extracting and learning natural representations but require enormous amounts of data for effective training. With these advances, UAVs have recently come to dominate remote sensing research. For both scientific and development purposes, we consider this wealth of information in a Special Issue of the high-visibility scientific journal Sensors.

Prof. Dr. Michalis Zervakis
Dr. Marios Antonakakis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • unmanned aerial vehicles
  • multisensory imaging
  • machine learning
  • deep neural networks
  • embedded systems
  • through-foliage tracking systems

Published Papers (8 papers)


Research

28 pages, 5193 KiB  
Article
Enhanced Lightweight YOLOX for Small Object Wildfire Detection in UAV Imagery
by Tian Luan, Shixiong Zhou, Guokang Zhang, Zechun Song, Jiahui Wu and Weijun Pan
Sensors 2024, 24(9), 2710; https://doi.org/10.3390/s24092710 - 24 Apr 2024
Viewed by 417
Abstract
Target detection technology based on unmanned aerial vehicle (UAV)-derived aerial imagery has been widely applied in the field of forest fire patrol and rescue. However, due to the specificity of UAV platforms, there are still significant issues to be resolved such as severe omission, low detection accuracy, and poor early warning effectiveness. In light of these issues, this paper proposes an improved YOLOX network for the rapid detection of forest fires in images captured by UAVs. Firstly, to enhance the network’s feature-extraction capability in complex fire environments, a multi-level-feature-extraction structure, CSP-ML, is designed to improve the algorithm’s detection accuracy for small-target fire areas. Additionally, a CBAM attention mechanism is embedded in the neck network to reduce interference caused by background noise and irrelevant information. Secondly, an adaptive-feature-extraction module is introduced in the YOLOX network’s feature fusion part to prevent the loss of important feature information during the fusion process, thus enhancing the network’s feature-learning capability. Lastly, the CIoU loss function is used to replace the original loss function, to address issues such as excessive optimization of negative samples and poor gradient-descent direction, thereby strengthening the network’s effective recognition of positive samples. Experimental results show that the improved YOLOX network has better detection performance, with mAP@50 and mAP@50_95 increasing by 6.4% and 2.17%, respectively, compared to the traditional YOLOX network. In multi-target flame and small-target flame scenarios, the improved YOLO model achieved a mAP of 96.3%, outperforming deep learning algorithms such as FasterRCNN, SSD, and YOLOv5 by 33.5%, 7.7%, and 7%, respectively. It has a lower omission rate and higher detection accuracy, and it is capable of handling small-target detection tasks in complex fire environments. This can provide support for UAV patrol and rescue applications from a high-altitude perspective. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
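
The CIoU loss mentioned in this abstract is a standard bounding-box regression loss rather than a contribution of the paper. For reference, a minimal sketch of its usual formulation for a single pair of boxes is given below (plain Python); the authors' actual training code is not reproduced here.

```python
import math

def ciou_loss(box_p, box_g):
    """Complete-IoU (CIoU) loss for two axis-aligned boxes given as
    (x1, y1, x2, y2). Returns 1 - CIoU, as commonly defined."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g

    # Intersection over union
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter + 1e-9)

    # Squared distance between box centres
    rho2 = ((px1 + px2) / 2 - (gx1 + gx2) / 2) ** 2 + \
           ((py1 + py2) / 2 - (gy1 + gy2) / 2) ** 2

    # Squared diagonal of the smallest enclosing box
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2 + 1e-9

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1)) -
                              math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)

    return 1 - (iou - rho2 / c2 - alpha * v)

print(ciou_loss((10, 10, 50, 60), (12, 8, 55, 58)))
```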

18 pages, 3138 KiB  
Article
Implementation of Lightweight Convolutional Neural Networks with an Early Exit Mechanism Utilizing 40 nm CMOS Process for Fire Detection in Unmanned Aerial Vehicles
by Yu-Pei Liang, Chen-Ming Chang and Ching-Che Chung
Sensors 2024, 24(7), 2265; https://doi.org/10.3390/s24072265 - 2 Apr 2024
Viewed by 543
Abstract
The advancement of unmanned aerial vehicles (UAVs) enables early detection of numerous disasters. Efforts have been made to automate the monitoring of data from UAVs, with machine learning methods recently attracting significant interest. These solutions often face challenges with high computational costs and energy usage. Conventionally, data from UAVs are processed using cloud computing, where they are sent to the cloud for analysis. However, this method might not meet the real-time needs of disaster relief scenarios. In contrast, edge computing provides real-time processing at the site but still struggles with computational and energy efficiency issues. To overcome these obstacles and enhance resource utilization, this paper presents a convolutional neural network (CNN) model with an early exit mechanism designed for fire detection in UAVs. This model is implemented using TSMC 40 nm CMOS technology, which aids in hardware acceleration. Notably, the neural network has a modest parameter count of 11.2 k. In the hardware computation part, the CNN circuit completes fire detection in approximately 230,000 cycles. Power-gating techniques are also used to turn off inactive memory, contributing to reduced power consumption. The experimental results show that this neural network reaches a maximum accuracy of 81.49% in the hardware implementation stage. After automatic layout and routing, the CNN hardware accelerator can operate at 300 MHz, consuming 117 mW of power. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
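
The early-exit idea can be illustrated in software, even though the article describes a dedicated 40 nm hardware implementation. The sketch below is a hypothetical PyTorch model: if the first classifier is confident, the deeper layers are skipped. Layer sizes and the confidence threshold are arbitrary and do not correspond to the 11.2 k-parameter design reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitFireCNN(nn.Module):
    """Illustrative two-stage CNN with an early exit branch (hypothetical
    architecture, not the article's hardware design)."""
    def __init__(self, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.exit1 = nn.Linear(16, 2)   # early classifier: fire / no fire
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.exit2 = nn.Linear(32, 2)   # final classifier

    def forward(self, x):
        f1 = self.stage1(x)
        logits1 = self.exit1(F.adaptive_avg_pool2d(f1, 1).flatten(1))
        conf, _ = F.softmax(logits1, dim=1).max(dim=1)
        if not self.training and conf.min() >= self.threshold:
            return logits1               # confident: take the early exit
        f2 = self.stage2(f1)
        return self.exit2(F.adaptive_avg_pool2d(f2, 1).flatten(1))

model = EarlyExitFireCNN().eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 64, 64))
print(out.shape)
```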

20 pages, 23479 KiB  
Article
Methodology for Creating a Digital Bathymetric Model Using Neural Networks for Combined Hydroacoustic and Photogrammetric Data in Shallow Water Areas
by Małgorzata Łącka and Jacek Łubczonek
Sensors 2024, 24(1), 175; https://doi.org/10.3390/s24010175 - 28 Dec 2023
Viewed by 753
Abstract
This study uses a neural network to propose a methodology for creating digital bathymetric models for shallow water areas that are partially covered by a mix of hydroacoustic and photogrammetric data. A key challenge of this approach is the preparation of the training dataset from such data. Focusing on cases in which the training dataset covers only part of the measured depths, the approach employs generalized linear regression for data optimization followed by multilayer perceptron neural networks for bathymetric model creation. The research assessed the impact of data reduction, outlier elimination, and regression surface-based filtering on neural network learning. The average values of the root mean square (RMS) error were successively obtained for the studied nearshore, middle, and deep water areas, which were 0.12 m, 0.03 m, and 0.06 m, respectively; moreover, the values of the mean absolute error (MAE) were 0.11 m, 0.02 m, and 0.04 m, respectively. Following detailed quantitative and qualitative error analyses, the results indicate variable accuracy across different study areas. Nonetheless, the methodology demonstrated effectiveness in depth calculations for water bodies, although it faces challenges with respect to accuracy, especially in preserving nearshore values in shallow areas. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
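
As a rough illustration of the general workflow (not the authors' pipeline, which also includes regression-based data optimization and outlier filtering), the sketch below fits a multilayer perceptron to synthetic (x, y, depth) samples and reports RMSE and MAE, the same error measures quoted in the abstract. All data, layer sizes, and settings are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Hypothetical training set: planar coordinates (x, y) from merged
# hydroacoustic and photogrammetric soundings, with measured depth z.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(2000, 2))
z = 0.05 * xy[:, 0] + 0.02 * xy[:, 1] + rng.normal(0, 0.05, 2000)  # synthetic depths

X_train, X_test, z_train, z_test = train_test_split(xy, z, test_size=0.3, random_state=0)

# Multilayer perceptron, as in the article's general approach; the exact
# network configuration used by the authors is not reproduced here.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X_train, z_train)

pred = mlp.predict(X_test)
rmse = mean_squared_error(z_test, pred) ** 0.5
mae = mean_absolute_error(z_test, pred)
print(f"RMSE = {rmse:.3f} m, MAE = {mae:.3f} m")
```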

16 pages, 3862 KiB  
Article
A UAV Intelligent System for Greek Power Lines Monitoring
by Aikaterini Tsellou, George Livanos, Dimitris Ramnalis, Vassilis Polychronos, Georgios Plokamakis, Michalis Zervakis and Konstantia Moirogiorgou
Sensors 2023, 23(20), 8441; https://doi.org/10.3390/s23208441 - 13 Oct 2023
Viewed by 1475
Abstract
Power line inspection is one important task performed by electricity distribution network operators worldwide. It is part of the equipment maintenance for such companies and forms a crucial procedure since it can provide diagnostics and prognostics about the condition of the power line network. Furthermore, it helps with effective decision making in the case of fault detection. Nowadays, the inspection of power lines is performed either using human operators that scan the network on foot and search for obvious faults, or using unmanned aerial vehicles (UAVs) and/or helicopters equipped with camera sensors capable of recording videos of the power line network equipment, which are then inspected by human operators offline. In this study, we propose an autonomous, intelligent inspection system for power lines, which is equipped with camera sensors operating in the visual (Red–Green–Blue (RGB) imaging) and infrared (thermal imaging) spectrums, capable of providing real-time alerts about the condition of power lines. The very first step in power line monitoring is identifying and segmenting them from the background, which constitutes the principal goal of the presented study. The identification of power lines is accomplished through an innovative hybrid approach that combines RGB and thermal data-processing methods under a custom-made drone platform, providing an automated tool for in situ analyses not only in offline mode. In this direction, the human operator role is limited to the flight-planning and control operations of the UAV. The benefits of using such an intelligent UAV system are many, mostly related to the timely and accurate detection of possible faults, along with the side benefits of personnel safety and reduced operational costs. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
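
The article's hybrid RGB/thermal pipeline is not detailed in the abstract; as a loose illustration of the fusion idea only, the sketch below keeps Hough-line candidates from the visible image where a co-registered thermal frame exceeds a fixed threshold. File paths, thresholds, and the assumption of pixel-aligned frames are all hypothetical.

```python
import cv2
import numpy as np

def detect_power_lines(rgb_path: str, thermal_path: str) -> np.ndarray:
    """Toy RGB/thermal fusion: line candidates from the RGB frame are kept
    only where the (assumed co-registered) thermal frame is warm. A stand-in
    for the article's hybrid pipeline, which is not described here."""
    rgb = cv2.imread(rgb_path)                                 # H x W x 3
    thermal = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)   # H x W, assumed aligned

    # Thin, elongated edges in the visible spectrum
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    # Warm regions in the thermal frame (fixed threshold for illustration)
    warm = cv2.threshold(thermal, 180, 255, cv2.THRESH_BINARY)[1]

    mask = np.zeros(gray.shape, dtype=np.uint8)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            segment = np.zeros_like(mask)
            cv2.line(segment, (x1, y1), (x2, y2), 255, thickness=3)
            # Keep the segment only if it overlaps warm pixels
            if cv2.countNonZero(cv2.bitwise_and(segment, warm)) > 0:
                mask = cv2.bitwise_or(mask, segment)
    return mask
```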

21 pages, 6167 KiB  
Article
Land and Seabed Surface Modelling in the Coastal Zone Using UAV/USV-Based Data Integration
by Oktawia Specht
Sensors 2023, 23(19), 8020; https://doi.org/10.3390/s23198020 - 22 Sep 2023
Cited by 2 | Viewed by 792
Abstract
The coastal zone is an area that includes the sea coast and adjacent parts of the land and sea, where the mutual interaction of these environments is clearly marked. Hence, the modelling of the land and seabed parts of the coastal zone is crucial and necessary in order to determine the dynamic changes taking place in this area. The accurate determination of the terrain in the coastal zone is now possible thanks to the use of Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vehicles (USVs). The aim of this article is to present land and seabed surface modelling in the coastal zone using UAV/USV-based data integration. Bathymetric and photogrammetric measurements were carried out on the waterbody adjacent to a public beach in Gdynia (Poland) in 2022 using the DJI Phantom 4 Real Time Kinematic (RTK) UAV and the AutoDron USV. As a result of geospatial data integration, topo-bathymetric models in the coastal zone were developed using the following terrain-modelling methods: Inverse Distance to a Power (IDP), kriging, Modified Shepard’s Method (MSM) and Natural Neighbour Interpolation (NNI). Then, the accuracies of the selected models obtained using the different interpolation methods, taking into account the division into land and seabed parts, were analysed. Research has shown that the most accurate method for modelling both the land and seabed surfaces of the coastal zone is the kriging (linear model) method. The differences between the interpolated and measurement values of the R95 measurement are 0.032 m for the land part and 0.034 m for the seabed part. It should also be noted that the data interpolated by the kriging (linear model) method showed a very good fit to the measurement data recorded by the UAVs and USVs. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
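
Of the interpolation methods compared in the study, Inverse Distance to a Power is the simplest to state: each grid node is a distance-weighted average of the measured points. The sketch below is a generic implementation for scattered survey points; the kriging (linear model) variant that performed best in the article, as well as any search-radius settings, are not reproduced.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance to a Power (IDP/IDW) interpolation: a generic sketch,
    not the exact configuration used in the article."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    xy_query = np.asarray(xy_query, dtype=float)

    # Pairwise distances between query points and measurement points
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                # avoid division by zero at exact hits
    w = 1.0 / d ** power
    return (w * z_known[None, :]).sum(axis=1) / w.sum(axis=1)

# Example: interpolate two grid nodes from four hypothetical UAV/USV survey points
pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
depths = [1.2, 1.5, 0.8, 1.1]
print(idw_interpolate(pts, depths, [(5, 5), (1, 1)]))
```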

15 pages, 8522 KiB  
Article
Object Detection for UAV Aerial Scenarios Based on Vectorized IOU
by Shun Lu, Hanyu Lu, Jun Dong and Shuang Wu
Sensors 2023, 23(6), 3061; https://doi.org/10.3390/s23063061 - 13 Mar 2023
Cited by 5 | Viewed by 2235
Abstract
Object detection in unmanned aerial vehicle (UAV) images is an extremely challenging task and involves problems such as multi-scale objects, a high proportion of small objects, and high overlap between objects. To address these issues, first, we design a Vectorized Intersection Over Union (VIOU) loss based on YOLOv5s. This loss uses the width and height of the bounding box as a vector to construct a cosine function that corresponds to the size of the box and the aspect ratio and directly compares the center point value of the box to improve the accuracy of the bounding box regression. Second, we propose a Progressive Feature Fusion Network (PFFN) that addresses the issue of insufficient semantic extraction of shallow features by Panet. This allows each node of the network to fuse semantic information from deep layers with features from the current layer, thus significantly improving the detection ability of small objects in multi-scale scenes. Finally, we propose an Asymmetric Decoupled (AD) head, which separates the classification network from the regression network and improves the classification and regression capabilities of the network. Our proposed method results in significant improvements on two benchmark datasets compared to YOLOv5s. On the VisDrone 2019 dataset, the performance increased by 9.7% from 34.9% to 44.6%, and on the DOTA dataset, the performance increased by 2.1%. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
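
The decoupled-head idea referenced in this abstract, running classification and box regression in separate branches, can be sketched generically as below. This is a plain decoupled head for illustration only; the paper's Asymmetric Decoupled (AD) head, PFFN, and VIOU loss are not reproduced, and all channel counts are hypothetical.

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Generic decoupled detection head: separate classification and box
    regression branches instead of one shared convolution. Illustrative only;
    not the article's Asymmetric Decoupled head."""
    def __init__(self, in_channels: int = 256, num_classes: int = 10, num_anchors: int = 3):
        super().__init__()
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_channels, num_anchors * num_classes, 1),
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_channels, num_anchors * 4, 1),   # (x, y, w, h) per anchor
        )

    def forward(self, feat):
        return self.cls_branch(feat), self.reg_branch(feat)

head = DecoupledHead()
cls_out, reg_out = head(torch.randn(1, 256, 20, 20))
print(cls_out.shape, reg_out.shape)   # (1, 30, 20, 20) and (1, 12, 20, 20)
```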

15 pages, 9693 KiB  
Article
An Acoustic Camera for Use on UAVs
by Iva Salom, Goran Dimić, Vladimir Čelebić, Marko Spasenović, Milica Raičković, Mirjana Mihajlović and Dejan Todorović
Sensors 2023, 23(2), 880; https://doi.org/10.3390/s23020880 - 12 Jan 2023
Cited by 3 | Viewed by 2675
Abstract
Airborne acoustic surveillance would enable and ease several applications, including security surveillance, urban and industrial noise monitoring, rescue missions, and wildlife monitoring. Airborne surveillance with an acoustic camera mounted on an airship would provide the deployment flexibility and utility required by these applications. Nevertheless, and problematically for these applications, there is not a single acoustic camera mounted on an airship yet. We make significant advances towards solving this problem by designing and constructing an acoustic camera for direct mounting on the hull of a UAV airship. The camera consists of 64 microphones, a central processing unit, and software for data acquisition and processing dedicatedly developed for far-field low-level acoustic signal detection. We demonstrate a large-aperture mock-up camera operation on the ground, although all preparations have been made to integrate the camera onto an airship. The camera has an aperture of 2 m and has been designed for surveillance from a height up to 300 m, with a spatial resolution of 12 m. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
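
The abstract does not detail the array-processing algorithm, so the sketch below uses delay-and-sum beamforming, a common baseline for microphone-array acoustic cameras, to estimate a source azimuth from simulated signals. The eight-microphone linear array, 2 kHz tone, and all other parameters are hypothetical and far smaller than the 64-microphone system described.

```python
import numpy as np

def delay_and_sum_power(signals, mic_xy, angles_deg, fs=16000, c=343.0, f0=2000.0):
    """Narrowband delay-and-sum beamformer for a planar microphone array:
    steer to each candidate azimuth and return the summed signal power.
    A textbook sketch, not the article's dedicated processing chain.

    signals : (n_mics, n_samples) time-domain data
    mic_xy  : (n_mics, 2) microphone positions in metres
    """
    mic_xy = np.asarray(mic_xy, dtype=float)
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    bin_idx = np.argmin(np.abs(freqs - f0))
    spectra = np.fft.rfft(signals, axis=1)[:, bin_idx]        # (n_mics,) at f0

    powers = []
    for az in np.deg2rad(angles_deg):
        direction = np.array([np.cos(az), np.sin(az)])
        delays = mic_xy @ direction / c                       # seconds per mic
        steering = np.exp(-2j * np.pi * f0 * delays)          # phase compensation
        powers.append(np.abs(np.sum(steering * spectra)) ** 2)
    return np.array(powers)

# Example: 8 microphones on a 2 m aperture line, plane wave from 30 degrees
fs, f0 = 16000, 2000.0
mic_xy = np.column_stack([np.linspace(-1, 1, 8), np.zeros(8)])
t = np.arange(0, 0.05, 1 / fs)
true_dir = np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
sig = np.stack([np.sin(2 * np.pi * f0 * (t + (m @ true_dir) / 343.0)) for m in mic_xy])
p = delay_and_sum_power(sig, mic_xy, angles_deg=np.arange(0, 181, 5), fs=fs, f0=f0)
print("estimated azimuth:", np.arange(0, 181, 5)[np.argmax(p)], "degrees")
```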

22 pages, 8333 KiB  
Article
Automated Detection of Atypical Aviation Obstacles from UAV Images Using a YOLO Algorithm
by Marta Lalak and Damian Wierzbicki
Sensors 2022, 22(17), 6611; https://doi.org/10.3390/s22176611 - 1 Sep 2022
Cited by 8 | Viewed by 1950
Abstract
Unmanned Aerial Vehicles (UAVs) are able to guarantee very high spatial and temporal resolution and up-to-date information in order to ensure safety in the direct vicinity of the airport. The current dynamic growth of investment areas in large agglomerations, especially in the neighbourhood of airports, leads to the emergence of objects that may constitute a threat for air traffic. In order to ensure that the obtained spatial data are accurate, it is necessary to understand the detection of atypical aviation obstacles by means of their identification and classification. Quite often, a common feature of atypical aviation obstacles is their elongated shape and irregular cross-section. These factors pose a challenge for modern object detection techniques when the processes used to determine their height are automated. This paper analyses the possibilities for the automated detection of atypical aviation obstacles based on the YOLO algorithm and presents an analysis of the accuracy of the determination of their height based on data obtained from UAV. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)