Proceeding Paper

Embedding ROS and AI-Based Perception Capabilities in a Novel Low-Cost Assistive Robotic Platform †

by Jaime Mas-Santillán *, Francisco Javier Acevedo-Rodríguez and Roberto Javier López-Sastre
GRAM, Department of Signal Theory and Communications, University of Alcalá, 28805 Alcalá de Henares, Spain
* Author to whom correspondence should be addressed.
Presented at the 4th XoveTIC Conference, A Coruña, Spain, 7–8 October 2021.
Eng. Proc. 2021, 7(1), 12; https://doi.org/10.3390/engproc2021007012
Published: 2 October 2021
(This article belongs to the Proceedings of The 4th XoveTIC Conference)

Abstract: This paper describes the development of a novel low-cost assistive robotic platform with AI-based perception capabilities, able to navigate autonomously using the Robot Operating System (ROS). The platform is a differential wheeled robot, equipped with two motors and encoders, which are controlled with an Arduino board. It also includes a Jetson Xavier processing board, on which we deploy all the AI processes and the ROS architecture. As a result, we have a fully functional platform, able to recognize actions online and to navigate autonomously through environments whose maps have been preloaded.

1. Introduction

Research in assistive robotics is experiencing a tremendous boom. More and more robotic platforms are emerging to assist people with functional diversity or the elderly (e.g., [1,2,3]). This is all due to major advances in robotics and artificial intelligence in recent years. However, despite all the advances, most of these commercial platforms are quite expensive.
In this work, we focus on the development of a low-cost assistive robotic platform, with advanced navigation and AI-based perception capabilities. For us, there is no doubt that cost-effectiveness must be considered in the context of assistive technology, because the final deployment of robots to end-users usually results in an expense paid by families or health services. Therefore, the main contribution of this work is the construction of a novel low-cost assistive robot. We have been able to develop a fully functional robotic platform, capable of navigating autonomously and recognizing the actions of its users, where all the functionalities are integrated into a Robot Operating System (ROS) architecture.

2. Low-Cost Robotic Platform

2.1. Hardware Description

The platform is a differential wheeled robot, equipped with two motors and their corresponding encoders, which are controlled with an Arduino board. The internal structure is constructed of wood and metal, and the outer shell, imitating a person wearing a tuxedo, was made entirely by 3D printing. The complete platform is approximately 800 mm tall, slightly higher than a table. As for the sensors, the platform has a LIDAR, a touch screen, and a frontal camera. The LIDAR measures the distances to surrounding obstacles, which are used for navigation and localization purposes. Finally, we have a Jetson Xavier board with ROS integration. Figure 1a shows a picture of the robot.

2.2. ROS-Based Software Architecture

The system architecture is based on ROS Melodic. Figure 1b shows a diagram of the complete ROS architecture we have implemented, mainly using Python. Green nodes represent ROS packages distributed with the framework, and orange nodes are the ROS nodes we developed entirely ourselves to integrate ROS into our platform.
The core of the navigation system is the ROS Navigation Stack, which is fed with a map provided by the map server, the odometry source from the wheels, the LIDAR information, and the estimated position, provided in our case by the Adaptive Monte Carlo Localization (AMCL) package. The interface with the user is implemented through RVIZ, a ROS package that provides a visual environment where the map and the estimated robot position are shown. The user can indicate an initial position to help the robot localize itself, and can order the robot to move to a desired position, specifying the final yaw rotation. RVIZ then sends ROS messages to the move_base core package, which generates the linear and angular velocity commands needed to move the robot to the desired position. All these commands, and their parameters, are continuously updated with the information on newly detected obstacles, the odometry, and the estimated localization.
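As an illustration of this interface, the following minimal sketch shows how a navigation goal with a given position and yaw rotation can also be sent to move_base programmatically from Python, using the standard actionlib client of the ROS Navigation Stack; the target coordinates and frame names are merely illustrative, not part of our deployment.

```python
#!/usr/bin/env python
# Minimal sketch: send a navigation goal (position + yaw) to move_base.
# Action, topic, and frame names follow standard ROS Navigation Stack
# conventions; the coordinates below are illustrative only.
import rospy
import actionlib
from math import radians
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'   # goal expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 3.0     # desired position (meters)
goal.target_pose.pose.position.y = 1.5
# Desired final yaw rotation (90 degrees), encoded as a quaternion.
qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, radians(90))
goal.target_pose.pose.orientation.x = qx
goal.target_pose.pose.orientation.y = qy
goal.target_pose.pose.orientation.z = qz
goal.target_pose.pose.orientation.w = qw

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Navigation finished with state %d', client.get_state())
```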
In order to fully embed ROS in our robotic platform, we have developed a Hardware Interface package that: (1) translates ROS commands for the Arduino board; and (2) sends back to the ROS system the messages, with the specific format needed, to complete the communication loop with the Arduino. The Arduino package in Figure 1b includes the libraries we developed to establish the communication between the motors of the platform and ROS over a serial port. In particular, we have designed a communication protocol with commands that allow us to read the data from the encoders and to transmit a sequence of speed commands to each wheel.
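The sketch below outlines the general structure of such a hardware-interface node: it converts cmd_vel Twist messages into per-wheel speed commands using differential-drive kinematics, and polls the encoders over the serial port. The serial command strings ('v' and 'e'), port name, baud rate, and wheel separation are hypothetical placeholders; the actual protocol of our platform is only summarized at a high level in this paper.

```python
#!/usr/bin/env python
# Illustrative sketch of a hardware-interface node. The serial commands
# 'v <left> <right>' (set wheel speeds) and 'e' (request encoder ticks)
# are hypothetical placeholders, not the platform's real protocol.
import rospy
import serial
from geometry_msgs.msg import Twist
from std_msgs.msg import Int32MultiArray

WHEEL_SEPARATION = 0.30  # meters (assumed value, for illustration only)

port = serial.Serial('/dev/ttyACM0', 115200, timeout=0.1)

def cmd_vel_cb(msg):
    # Differential-drive kinematics: split the linear and angular
    # velocities into one speed command per wheel.
    v_left = msg.linear.x - msg.angular.z * WHEEL_SEPARATION / 2.0
    v_right = msg.linear.x + msg.angular.z * WHEEL_SEPARATION / 2.0
    port.write(('v %.3f %.3f\n' % (v_left, v_right)).encode())

rospy.init_node('hardware_interface')
rospy.Subscriber('cmd_vel', Twist, cmd_vel_cb)
enc_pub = rospy.Publisher('wheel_encoders', Int32MultiArray, queue_size=10)

rate = rospy.Rate(20)  # poll the encoders at 20 Hz
while not rospy.is_shutdown():
    port.write(b'e\n')  # request the current encoder ticks
    fields = port.readline().decode().split()
    if len(fields) == 2:
        enc_pub.publish(Int32MultiArray(data=[int(fields[0]), int(fields[1])]))
    rate.sleep()
```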
As for the AI perception capabilities, the architecture integrates a specific node for Online Action Detection (OAD); see Figure 1b. In this node we have deployed a 3D convolutional neural network model [4], which is able to recognize the actions performed by the user of the platform. All this processing is integrated in the ROS architecture, which means that the OAD system can run on an external node. In laboratory tests, we have run the OAD node on a workstation, so that the Jetson board is relieved of the intensive processing required by this AI functionality.
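The following sketch outlines one possible structure for such an OAD node, assuming a 3D CNN exported with PyTorch: frames from the camera topic are buffered into short clips, which are classified and published as action labels. The model file, label list, clip length, and topic names are hypothetical placeholders; the actual network we deploy is the one described in [4].

```python
#!/usr/bin/env python
# Sketch of an OAD node (not our actual implementation): buffer camera
# frames into short clips and classify them with a 3D CNN. The model and
# label files are hypothetical; preprocessing (resizing, normalization)
# is omitted for brevity.
import rospy
import torch
import numpy as np
from collections import deque
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge

CLIP_LEN = 16  # frames per clip, a common choice for 3D CNNs

bridge = CvBridge()
frames = deque(maxlen=CLIP_LEN)
model = torch.jit.load('oad_model.pt').eval()  # hypothetical exported model
labels = [line.strip() for line in open('ucf101_labels.txt')]

def image_cb(msg):
    frames.append(bridge.imgmsg_to_cv2(msg, 'rgb8'))
    if len(frames) == CLIP_LEN:
        # Build a (1, C, T, H, W) tensor, the usual 3D-convolution layout.
        clip = np.stack(frames).transpose(3, 0, 1, 2)[None].astype(np.float32)
        with torch.no_grad():
            scores = model(torch.from_numpy(clip))
        action_pub.publish(labels[int(scores.argmax())])

rospy.init_node('online_action_detection')
action_pub = rospy.Publisher('detected_action', String, queue_size=1)
rospy.Subscriber('camera/image_raw', Image, image_cb)
rospy.spin()
```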

3. Experiments

We have tested the platform in the building of the Polytechnic School of the University of Alcalá. To do so, we have created a set of maps of different floors of this building, which can be used with the RVIZ application.
In the experiments, we have started the robot from different departure positions, with different target locations and orientations. We have also tested how the platform interacts with dynamic obstacles during navigation. Based on the results, we have observed that in most cases the platform manages to reach its objective with the correct orientation. Moreover, the platform never collided with objects or people. A 10 m run takes our robot about a minute and a half, although this depends on how many obstacles it encounters along the trajectory. We have also observed that, when driving in very open spaces where the LIDAR does not detect obstacles or walls, the platform sometimes needs to stop and rotate in place to recalculate its position. Figure 2a shows the RVIZ interface while our robot is navigating. We have recorded a video of the platform navigating: https://youtu.be/qjXZxAmTKXk, accessed on 29 September 2021.
For the OAD, we have embedded in our platform a model trained on the UCF-101 dataset [5], which exhibits an average clip accuracy of 72.4% over the 101 action categories in the test set of Split 1 of UCF-101. Figure 2b shows qualitative OAD results.
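As a reference for this metric, average clip accuracy is simply the fraction of test clips whose predicted class matches the ground-truth label. A minimal sketch, assuming per-clip predictions and labels stored in hypothetical NumPy files:

```python
# Minimal sketch of the clip accuracy computation behind a figure like
# 72.4%. The file names and formats are hypothetical placeholders.
import numpy as np

preds = np.load('clip_predictions.npy')   # predicted class index per clip
labels = np.load('clip_labels.npy')       # ground-truth class index per clip

clip_accuracy = float(np.mean(preds == labels))
print('Average clip accuracy: %.1f%%' % (100.0 * clip_accuracy))
```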

4. Conclusions

We have developed a low-cost assistive robotic platform that is able to navigate autonomously, detecting dynamic obstacles and following an optimal trajectory. Our robot is also able to recognize actions online, processing the images acquired by an RGB camera. All these processes are fully integrated into a ROS architecture. Future work will focus on developing applications that allow the platform to interact with and help people with functional diversity.

Author Contributions

Conceptualization, all; methodology, R.J.L.-S., F.J.A.-R.; software, J.M.-S., F.J.A.-R.; validation, all; investigation, all; writing—original draft preparation, all; writing—review and editing, all; supervision, R.J.L.-S.; project administration, R.J.L.-S.; funding acquisition, R.J.L.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by: (a) project AIRPLANE, with reference PID2019-104323RB-C31, of Spain’s Ministry of Science and Innovation; and (b) project CM/JIN/2019-022 of the University of Alcalá.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This research uses the publicly available UCF-101 dataset [5].

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Lamas, C.M.; Bellas, F.; Guijarro-Berdiñas, B. SARDAM: Service Assistant Robot for Daily Activity Monitoring. Proceedings 2020, 54, 3.
  2. Martinez-Martin, E.; Costa, A.; Cazorla, M. PHAROS 2.0—A PHysical Assistant RObot System Improved. Sensors 2019, 19, 4531.
  3. Zlatintsi, A.; Dometios, A.; Kardaris, N.; Rodomagoulakis, I.; Koutras, P.; Papageorgiou, X.; Maragos, P.; Tzafestas, C.; Vartholomeos, P.; Hauer, K.; et al. I-Support: A robotic platform of an assistive bathing robot for the elderly population. Robot. Auton. Syst. 2020, 126, 103451.
  4. Baptista-Ríos, M.; López-Sastre, R.J.; Caba Heilbron, F.; Van Gemert, J.C.; Acevedo-Rodríguez, F.J.; Maldonado-Bascón, S. Rethinking Online Action Detection in Untrimmed Videos: A Novel Online Evaluation Protocol. IEEE Access 2019, 8, 5139–5146.
  5. Soomro, K.; Zamir, A.R.; Shah, M. UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. arXiv 2012, arXiv:1212.0402.
Figure 1. (a) Picture of our novel low-cost robotic platform and (b) the ROS architecture implemented.
Figure 2. (a) Our robotic platform navigating using RVIZ and (b) qualitative results of the OAD system.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

