Article

Realistic Texture Mapping of 3D Medical Models Using RGBD Camera for Mixed Reality Applications

Department of Information Engineering, University of Florence, Via di Santa Marta, 3, 50139 Firenze, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(10), 4133; https://doi.org/10.3390/app14104133
Submission received: 25 March 2024 / Revised: 5 May 2024 / Accepted: 11 May 2024 / Published: 13 May 2024

Abstract

Augmented and mixed reality in the medical field is becoming increasingly important. The creation and visualization of digital models that closely resemble reality could greatly enhance the user experience during augmented or mixed reality activities such as surgical planning and the education, training and testing of medical students. This study introduces a technique for enhancing a 3D digital model reconstructed from cone-beam computed tomography images with its real coloured texture using an Intel D435 RGBD camera. The method is based on iteratively projecting the two models onto a 2D plane, identifying their contours and then minimizing the distance between them. Finally, the coloured digital models were displayed in mixed reality through a Microsoft HoloLens 2, and an application to interact with them using hand gestures was developed. The registration error between the two 3D models, evaluated using 30,000 random points, indicates values of 1.1 ± 1.3 mm on the x-axis, 0.7 ± 0.8 mm on the y-axis, and 0.9 ± 1.2 mm on the z-axis. This result was achieved in three iterations, reducing the average registration error on the three axes from 1.4 mm to 0.9 mm. The heatmap created to visualize the spatial distribution of the error shows that it is uniformly distributed over the surface of the pointcloud obtained with the RGBD camera, except for some areas of the nose and ears where the registration error tends to increase. The obtained results indicate that the proposed methodology is effective. In addition, since the RGBD camera used is inexpensive, future approaches based on the simultaneous use of multiple cameras could further improve the results. Finally, the augmented reality visualization of the obtained result is innovative and could provide support in all those cases where the visualization of three-dimensional medical models is necessary.

1. Introduction

Augmented reality (AR) combines elements of the real world with digital elements, overlaying virtual information onto the view of the surrounding environment through devices such as smartphones, glasses, or headsets. Mixed reality (MR) is an advanced form of AR that integrates digital elements more deeply with the physical environment, allowing virtual objects to interact in real-time with the real world and vice versa through, for example, user hand gestures. Virtual reality (VR) creates a completely immersive environment separate from the physical reality. Users wear VR headsets to be fully immersed in virtual worlds, isolated from the external environment.
The use of these technologies in industry is spreading, with applications being developed not only for entertainment and video games but also in many other areas, including medicine and surgery. They are being introduced in several surgical fields: orthopaedic surgery [1,2,3,4,5,6], vascular surgery [7,8,9], oncology [10,11,12,13,14], neurosurgery [15,16,17,18] and maxillofacial surgery [19,20,21,22].
AR, MR and VR can also be used for educational purposes. These technologies are being developed to improve and facilitate the learning of complex information, for example in physiology and anatomy, where students need a three-dimensional understanding of human organ systems and structures [23,24,25], and also to train, test and improve medical students’ practical skills and competencies through surgical simulations [26,27,28,29,30].
During surgical operations, AR/MR could provide real-time information to surgeons directly in the operating room, potentially making external imaging screens unnecessary, since their content would be incorporated into the headset. Moreover, these technologies can enhance the doctor’s view by adding information useful for orienting tools in the surgical site, overlaying virtual models on the patient whether the area of interest is visible or not, and providing information about the surgical plan and its progress during the operation.
The reconstruction of a 3D digital model from images obtained with cone-beam computed tomography (CBCT) machines, displayed using these technologies, could provide doctors with an innovative and intuitive visualization method, informing them about the health status of internal tissues. Moreover, it could offer a new way to carry out image-guided surgical planning, which is crucial to achieving the desired treatment outcomes [31]. Other areas of interest are dentistry and maxillofacial surgery, where the untextured soft tissue makes it difficult for both the surgeon and the patient to construct a visual concept of the surgical plan [32]; in addition, preserving the patient’s aesthetic appearance could be of great interest.
However, since CT technology acquires images of internal tissues, the reconstructed 3D model does not have an external texture faithful to the real one. Integrating this information into the digital model could be useful for creating a new digital model that is more realistic and closer to the real object. Adding the external texture could also assist doctors during training phases by providing them with digital models that closely resemble those they will interact with during real surgical procedures.
Attempting to overcome this limitation, especially for craniofacial research, several studies have tried to acquire the outer surface of the subject using different technologies [33]. In particular, consumer-grade 3D scanning devices, such as RGB-D (red-green-blue-depth) sensors, which combine red, green, and blue (RGB) data with depth information (D), have been tested. The Azure Kinect development kit (Azure Kinect DK, Microsoft Inc., Redmond, WA, USA), released in 2020, is based on a continuous-wave time-of-flight (ToF) camera and seems to be a viable 3D scanning solution for clinical and research applications, with a systematic error of less than 2 mm [34,35]. The RAYFace (RayMedical, Ray Co., Ltd., Seongnam, Republic of Korea) is a 3D one-shot face-scanning solution developed in 2020 that reached an absolute surface discrepancy of 0.5277 when compared with other facial scanners [36]. Regarding the Intel 3D camera D435 (Intel Corporation, Santa Clara, CA, USA), one of the most popular RGBD sensors, Singh et al. [33] report that there is little research on its reliability.
In this study, the Intel D435 3D camera was used to collect real external texture information. Three-dimensional cameras offer a heightened sense of depth and realism compared to traditional two-dimensional imaging. Unlike standard cameras that record a flat representation of the scene, 3D cameras create images with depth perception, allowing viewers to experience a more immersive visual environment. The principle behind 3D cameras involves the use of multiple lenses or sensors to capture the same scene from slightly different perspectives, mimicking the way human eyes perceive depth. These variations in perspective are then processed to create pointclouds, which are three-dimensional representations of the surfaces and structures in the captured scene. Pointclouds are collections of individual data points in 3D space, each point representing a specific location and contributing to the overall depth information of the scene. Often, 3D cameras also integrate standard RGB cameras, allowing the pointclouds to be enriched with the true colours of the objects in the scene. This type of camera is called an RGBD camera.
This work aimed to register two 3D pointclouds obtained with different techniques (a CBCT machine and an RGBD camera), apply the real texture colours collected with the RGBD camera to the external layer of the CBCT pointcloud, and then show the result in the real environment using a mixed reality head-mounted display (HMD). The combined use of the two acquisition techniques could make it possible to obtain, in a single 3D model, the advantages of both: fidelity in the representation of the internal tissues thanks to the CBCT machine and of the external texture thanks to the RGBD camera. Furthermore, the integration of this model in a mixed reality system would allow the user to interact directly with the digital models through hand gestures, improving the user experience.

2. Materials and Methods

This work was developed using the Qt development environment (version 5.12.6) [37] and the C++ programming language. RGBD three-dimensional pointclouds were acquired with an Intel 3D camera D435 [38], CBCT images were acquired with a SeeFactor CT3 machine (Epica Imaginalis, Florence, Italy), and Unity 2020 [39] was used to develop an application designed for execution on the AR head-mounted display Microsoft HoloLens 2 [40]. The PC on which all the processing was performed was equipped with an Intel i7-11700KF CPU, an NVIDIA GeForce RTX 3060 Ti GPU, and 32 GB of DDR4 RAM at 3200 MHz. The human-head mimicking phantom used for the test is shown in Figure 1.
The methodology for acquiring, registering, and texturizing pointclouds proposed in this article consists of multiple steps. To facilitate understanding of the adopted logical flow, a conceptual diagram summarizing the main steps is shown in Figure 2.
The description of the process related to the pointcloud creation, based on CBCT images, is provided in Section 2.1, while that concerning the RGBD pointcloud is described in Section 2.2. In Section 2.3, the description of the registration and texturization processes is reported. Finally, Section 2.4 outlines the methodology used for visualization and interaction in mixed reality with the digital 3D model.

2.1. CT Images Processing

The following paragraphs describe all the processing applied to convert a series of CBCT images into a 3D pointcloud. Since the Digital Imaging and Communications in Medicine (DICOM) standard is the international standard for digital medical image management, within this article the images acquired through the CBCT machine will be referred to as DICOM images.

2.1.1. Segmentation Phase

Firstly, a segmentation process was performed on the DICOM images of the entire volume using the OpenCV library [41]. In particular, the threshold value used to segment the outer contour of each image was evaluated using Otsu’s method [42]. This algorithm searches for the threshold value that minimizes the intra-class variance, defined as a weighted sum of the variances of the two classes. An example of the result obtained by applying Otsu’s method to a DICOM image is shown in Figure 3.
Those DICOM images for which Otsu’s method failed to identify the optimal threshold value were segmented by manually selecting that value.
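For illustration, the snippet below is a minimal sketch of this segmentation step, assuming each slice has already been loaded and rescaled to an 8-bit grayscale cv::Mat; the function and variable names are ours, not taken from the original software.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Segment the outer contour of one CBCT slice with Otsu's method.
// "slice" is assumed to be a CV_8UC1 image.
cv::Mat segmentOuterContour(const cv::Mat& slice, double* chosenThreshold = nullptr)
{
    cv::Mat binary;
    // THRESH_OTSU ignores the supplied threshold (0) and automatically picks
    // the value that minimizes the intra-class variance of the two classes.
    double t = cv::threshold(slice, binary, 0, 255,
                             cv::THRESH_BINARY | cv::THRESH_OTSU);
    if (chosenThreshold) *chosenThreshold = t;  // useful to spot slices needing manual thresholding
    return binary;
}
```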

2.1.2. Creation of a 3D Pointcloud

Since the purpose was to create a 3D pointcloud expressed in meters, it was necessary to convert the x, y, and z values of each point into coordinates expressed in meters. Because the SeeFactor CT3 machine (Imaginalis S.r.l., Florence, Italy) has an isotropic acquisition volume, the slice thickness is equal to the volume pixel spacing. For this reason, the conversion was performed by multiplying each value by the DICOM standard parameter called slice thickness:
x = X · ST/1000,  y = Y · ST/1000,  z = Z · ST/1000  (1)
where x, y and z are the 3D values expressed in meters, X, Y and Z are the 3D values expressed in CBCT volume pixels, and ST is the slice thickness value. In particular, a slice thickness value of 0.35031 mm was used. The result obtained at the end of this procedure is shown in Figure 4. In the subsequent sections of the article, the pointcloud defined in this way will be referred to as the DICOM-based pointcloud.
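A minimal sketch of Equation (1) is shown below; the struct and function names are illustrative and not taken from the original code.

```cpp
// Convert CBCT volume-pixel coordinates (X, Y, Z) into meters using the
// DICOM slice thickness (in mm). The volume is isotropic, so a single
// factor applies to all three axes.
struct Point3D { double x, y, z; };

Point3D voxelToMeters(double X, double Y, double Z, double sliceThicknessMm)
{
    const double k = sliceThicknessMm / 1000.0;  // e.g., 0.35031 mm per voxel -> meters
    return { X * k, Y * k, Z * k };
}
```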

2.2. Pointclouds Acquisition with RGBD Camera

The Intel D435 3D camera is capable of both acquiring a 3D pointcloud and simultaneously capturing RGB images through an integrated 2D camera. By overlaying these two types of information, it is possible to obtain an RGBD pointcloud, where each point is coloured based on the associated RGB image. The following paragraphs will describe the procedures adopted for the registration and segmentation of these pointclouds.

2.2.1. Registration Algorithm

Since the acquisition of a single pointcloud did not allow for the optimal reconstruction of the phantom’s surface, especially in areas such as the ears and nose, it was necessary to capture a series of pointclouds from different angles and subsequently register them with respect to one another. The estimation of the rotation and translation of the camera with respect to a reference system is known as camera pose estimation. In this work, RGBD pointclouds were registered using a 7 × 7 ChArUco board as a fiducial marker (Figure 5). A ChArUco board is the combination of two types of markers, a chessboard and ArUco markers [43,44]: it is composed of a chessboard with white and black cells, where ArUco markers are placed inside the white ones.
Thanks to its known geometry, i.e., the number and dimensions in mm of the cells forming the chessboard and the number and dimensions in mm of the ArUco markers, it was possible to estimate the camera pose using this information together with the camera intrinsic parameter matrix A and the camera distortion coefficient vector K, defined as:
A = | f_x   0    pp_x |      K = (k_1, k_2, k_3, k_4, k_5)^t
    | 0     f_y  pp_y |
    | 0     0    1    |
where f_x and f_y are the focal lengths, p_0 = (pp_x, pp_y)^t is the principal point, and the K values are the lens radial and tangential distortion coefficients.
The camera pose was estimated with respect to an axis system defined on the ChArUco board, as shown in Figure 6. The OpenCV library was used on each acquired RGB frame to identify the ChArUco board and evaluate the camera pose. Once the poses for each frame were evaluated, it was possible to register the RGBD pointclouds to each other. Specifically, the pointclouds were registered with respect to the position of the RGBD camera when it was placed in front of the phantom, thus defining the reference system O_xyz^RGBD.
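The sketch below illustrates how a single-frame camera pose can be estimated from the ChArUco board with the OpenCV aruco module (classic contrib API). The board geometry follows Figure 5 (7 × 7 squares, 15 mm squares, 10 mm markers), while the marker dictionary and the function names are assumptions made for illustration only.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/aruco/charuco.hpp>
#include <vector>

// Estimate the camera pose (rvec, tvec) with respect to the ChArUco board
// for a single RGB frame. Returns false if the board is not visible.
bool estimateCameraPose(const cv::Mat& rgbFrame,
                        const cv::Mat& cameraMatrixA,   // intrinsic matrix A
                        const cv::Mat& distCoeffsK,     // distortion vector K
                        cv::Vec3d& rvec, cv::Vec3d& tvec)
{
    auto dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    // 7x7 squares, 15 mm squares, 10 mm markers (lengths expressed in meters).
    auto board = cv::aruco::CharucoBoard::create(7, 7, 0.015f, 0.010f, dictionary);

    std::vector<int> markerIds;
    std::vector<std::vector<cv::Point2f>> markerCorners;
    cv::aruco::detectMarkers(rgbFrame, dictionary, markerCorners, markerIds);
    if (markerIds.empty()) return false;

    std::vector<cv::Point2f> charucoCorners;
    std::vector<int> charucoIds;
    cv::aruco::interpolateCornersCharuco(markerCorners, markerIds, rgbFrame, board,
                                         charucoCorners, charucoIds,
                                         cameraMatrixA, distCoeffsK);
    if (charucoIds.size() < 4) return false;

    // rvec/tvec describe the board-to-camera transform; inverting it yields the
    // camera pose in the ChArUco reference system of Figure 6.
    return cv::aruco::estimatePoseCharucoBoard(charucoCorners, charucoIds, board,
                                               cameraMatrixA, distCoeffsK, rvec, tvec);
}
```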

2.2.2. Segmentation Algorithm

The RGBD pointcloud obtained as a result of the previous step was then segmented to uniquely identify a sub-pointcloud representing the object of interest, i.e., the human-head mimicking phantom. An iterative segmentation algorithm based on the mutual distances between points was applied: if two 3D points p_1 = (x_1, y_1, z_1) and p_2 = (x_2, y_2, z_2) had a Euclidean distance d:
d = sqrt((x_1 − x_2)^2 + (y_1 − y_2)^2 + (z_1 − z_2)^2)
less than a threshold d_min, then they were treated as points belonging to the same object; otherwise, they were treated as belonging to distinct objects.
Once the pointcloud segmentation algorithm was applied, a sub-pointcloud representing the object of interest was obtained. Since the pointcloud obtained at the end of the segmentation algorithm could still contain some artefacts, these were manually removed using the open-source software MeshLab (version 2022.02) [45]. The result obtained at the end of this procedure is shown in Figure 7.
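The following is a minimal, un-optimized sketch of the distance-based segmentation described above: points closer than d_min are grouped into the same cluster by region growing. The authors' implementation and any spatial acceleration structure are not specified, so this brute-force version is for illustration only.

```cpp
#include <vector>
#include <stack>
#include <cmath>
#include <cstddef>

struct P3 { double x, y, z; };

// Assign a cluster id to every point: points closer than dMin end up in the
// same cluster (region growing with an explicit stack).
std::vector<int> clusterByDistance(const std::vector<P3>& pts, double dMin)
{
    std::vector<int> label(pts.size(), -1);
    int current = 0;
    for (std::size_t seed = 0; seed < pts.size(); ++seed) {
        if (label[seed] != -1) continue;
        std::stack<std::size_t> open;
        open.push(seed);
        label[seed] = current;
        while (!open.empty()) {
            const std::size_t i = open.top();
            open.pop();
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (label[j] != -1) continue;
                const double dx = pts[i].x - pts[j].x;
                const double dy = pts[i].y - pts[j].y;
                const double dz = pts[i].z - pts[j].z;
                if (std::sqrt(dx * dx + dy * dy + dz * dz) < dMin) {
                    label[j] = current;          // same object as point i
                    open.push(j);
                }
            }
        }
        ++current;                               // start a new cluster
    }
    return label;                                // one cluster id per point
}
```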

2.2.3. Pointcloud Simplification

At the end of the segmentation algorithm, the RGBD pointcloud contained over 10.7 million 3D points. To reduce computational complexity, the pointcloud was simplified by averaging points lying within a Euclidean distance of 1 mm of each other into a single 3D point. Following this simplification process, the RGBD pointcloud was reduced to approximately 70,000 points. Reducing the number of points simplifies subsequent calculations while preserving the essential information of the original data.
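A common way to implement this kind of simplification is a voxel-grid average, sketched below under the assumption of a 1 mm grid; the exact merging strategy used by the authors is not detailed, and colour channels are omitted for brevity.

```cpp
#include <vector>
#include <map>
#include <tuple>
#include <cmath>

struct Pt { double x, y, z; };

// Replace all points falling into the same 1 mm cell with their average.
std::vector<Pt> simplify(const std::vector<Pt>& cloud, double cellSizeM = 0.001)
{
    std::map<std::tuple<long, long, long>, std::pair<Pt, int>> cells;
    for (const Pt& p : cloud) {
        const auto key = std::make_tuple(std::lround(p.x / cellSizeM),
                                         std::lround(p.y / cellSizeM),
                                         std::lround(p.z / cellSizeM));
        auto& acc = cells[key];                  // value-initialized on first access
        acc.first.x += p.x;  acc.first.y += p.y;  acc.first.z += p.z;
        acc.second += 1;
    }
    std::vector<Pt> out;
    out.reserve(cells.size());
    for (const auto& cell : cells) {
        const Pt& sum = cell.second.first;
        const int n = cell.second.second;
        out.push_back({ sum.x / n, sum.y / n, sum.z / n });
    }
    return out;
}
```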

2.3. Registration of DICOM-Based and RGBD Pointclouds

The following paragraphs describe in detail all the procedures applied to register the RGBD pointcloud with respect to the DICOM-based one.

2.3.1. Preliminary Bounding-Box Based Registration Algorithm

Since the two pointclouds were expressed in their own reference systems, it was necessary to apply an initial registration to express both of them in the same one. In particular, an approach based on bounding boxes (BB) was adopted. A bounding box is the parallelepiped that encloses a pointcloud and was created using the maximum and minimum values of the pointcloud along the x, y, and z axes. After evaluating the BB that encapsulates each of the two pointclouds, the translation vector T = (t_x, t_y, t_z)^t was computed by imposing that each vertex of the frontal face of the BB related to the DICOM-based pointcloud was translated onto the corresponding vertex of the BB of the RGBD pointcloud. Once the vector T had been evaluated and applied to the DICOM-based pointcloud, both pointclouds were expressed in the coordinate system of the 3D camera, namely O_xyz^RGBD. The result obtained at the end of this procedure is shown in Figure 8.
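A minimal sketch of the bounding-box computation and of the resulting translation T is given below; treating the minimum-z face as the "frontal" face is an assumption made only for illustration, and the type names are ours.

```cpp
#include <vector>
#include <algorithm>

struct V3 { double x, y, z; };
struct BBox { V3 minP, maxP; };

// Axis-aligned bounding box of a pointcloud.
BBox boundingBox(const std::vector<V3>& cloud)
{
    BBox b{ { 1e30, 1e30, 1e30 }, { -1e30, -1e30, -1e30 } };
    for (const V3& p : cloud) {
        b.minP.x = std::min(b.minP.x, p.x);  b.maxP.x = std::max(b.maxP.x, p.x);
        b.minP.y = std::min(b.minP.y, p.y);  b.maxP.y = std::max(b.maxP.y, p.y);
        b.minP.z = std::min(b.minP.z, p.z);  b.maxP.z = std::max(b.maxP.z, p.z);
    }
    return b;
}

// Translation T that brings the frontal face of the DICOM bounding box onto
// the frontal face of the RGBD bounding box.
V3 frontFaceTranslation(const BBox& dicom, const BBox& rgbd)
{
    return { rgbd.minP.x - dicom.minP.x,
             rgbd.minP.y - dicom.minP.y,
             rgbd.minP.z - dicom.minP.z };   // frontal face assumed at minimum z
}
```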

2.3.2. Minimizing Distance between Contours

To further improve the registration between the two pointclouds, an algorithm based on aligning their contours was developed. The algorithm iteratively applies the following steps:
(1)
Projection on a rotated plane: the reference system O_xyz^RGBD was rotated according to the random rotation vector R_1 = (α, β, γ), defining the reference system O_xyz,αβγ^RGBD. In detail, α defines a rotation on the xz plane, β a rotation on the yz plane, and γ a rotation on the xy plane. Then, the RGBD and DICOM-based pointclouds were projected onto the xy plane of the O_xyz,αβγ^RGBD reference system, defining two 2D images. Lastly, the objects’ contours were identified by evaluating the concave hull of the images, defining the 2D point vectors c_RGB and c_DICOM. The random values of the R_1 vector were chosen in the range ±0.5236 radians (i.e., ±30°).
(2)
Application of the distance transform filter: by applying the distance transform filter to the contour c_DICOM, the distance matrix was created. A distance transform filter is a mathematical operation that evaluates the distance of each pixel in an image to the nearest boundary or feature in the image. The result is a new image, called the distance map or distance transform map, where each pixel value represents the distance between that pixel and the nearest object boundary. Specifically, the distance transform filter implemented in the OpenCV library was used; an example of its application to a c_DICOM contour is shown in Figure 9c.
(3)
Evaluation of the optimized rotation vector R_2: the distance between the contours c_RGB and c_DICOM was minimized based on the distance matrix image. Specifically, the Nelder-Mead minimization algorithm [46], implemented in the OpenCV library, was used (a code sketch of this step is reported at the end of this subsection). The parameters used for the minimization are θ (rotation on the xy plane), t_x (translation along the x axis), and t_y (translation along the y axis). At the end of the minimization process, the optimized parameter vector R_2 = (θ_opt, t_x,opt, t_y,opt) was defined. Lastly, the optimal translations (t_x,opt, t_y,opt) were converted from volume-based translations to millimeter-based translations using the slice thickness parameter (Equation (1)).
(4)
Application of the R_2 vector: the RGBD pointcloud was then permanently rotated and translated using the values evaluated in step (3).
(5)
Stopping criterion: one thousand random points were identified on the RGBD pointcloud and their average Euclidean distance from the DICOM-based pointcloud was calculated. If the resulting mean registration error E was greater than a threshold value E_thr, the entire registration process was repeated by defining a new random rotation vector R_1; otherwise, the registration process was terminated. In particular, a threshold value E_thr = 1 mm was used.
For better clarification, the highlights of this method are shown in Figure 9 and the flowchart diagram of the algorithm is illustrated in Figure 10.
Figure 9. Highlights of the contours-based minimization algorithm. (a) RGBD pointcloud contour c_RGB after its projection on the xy plane defined by the rotation vector R_1 = (0, 0, 0); (b) DICOM-based pointcloud contour c_DICOM after its projection on the xy plane defined by the rotation vector R_1 = (0, 0, 0); (c) Application of the distance matrix filter to the c_DICOM contour. The colour scale from black to white indicates pixels closer to and farther from the reference contour, respectively; (d) Minimization of the distance between c_RGB and c_DICOM based on the distance matrix. c_DICOM is represented in white, the initial c_RGB contour in red, and the c_RGB contour at the end of the minimization in green.
The result obtained at the end of the algorithm is shown in Figure 11.
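The sketch below illustrates steps (2) and (3) of the algorithm: the distance transform of the DICOM contour image is sampled along the roto-translated RGB contour, and cv::DownhillSolver (OpenCV's Nelder-Mead implementation) searches for (θ, t_x, t_y). Class names, variable names, and the initial simplex step are illustrative assumptions, not taken from the original software.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/optim.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
#include <cmath>

// Cost function sampled by the Nelder-Mead solver: sum of distance-map values
// along the RGB contour after applying (theta, tx, ty).
class ContourCost : public cv::MinProblemSolver::Function
{
public:
    ContourCost(const cv::Mat& distMap, const std::vector<cv::Point2f>& cRGB)
        : distMap_(distMap), cRGB_(cRGB) {}

    int getDims() const override { return 3; }            // theta, tx, ty

    double calc(const double* p) const override
    {
        const double c = std::cos(p[0]), s = std::sin(p[0]);
        double cost = 0.0;
        for (const cv::Point2f& q : cRGB_) {
            int u = cvRound(c * q.x - s * q.y + p[1]);
            int v = cvRound(s * q.x + c * q.y + p[2]);
            if (u < 0 || v < 0 || u >= distMap_.cols || v >= distMap_.rows)
                cost += 1e3;                               // penalize leaving the map
            else
                cost += distMap_.at<float>(v, u);          // distance to c_DICOM
        }
        return cost;
    }
private:
    cv::Mat distMap_;
    std::vector<cv::Point2f> cRGB_;
};

// Returns the optimized vector R_2 = (theta_opt, tx_opt, ty_opt).
cv::Vec3d minimizeContourDistance(const cv::Mat& dicomContourMask,   // 8-bit, contour = 255
                                  const std::vector<cv::Point2f>& cRGB)
{
    // Distance of every pixel to the nearest contour pixel (contour pixels must be 0).
    cv::Mat distMap;
    cv::distanceTransform(255 - dicomContourMask, distMap, cv::DIST_L2, 3);

    cv::Ptr<cv::DownhillSolver> solver = cv::DownhillSolver::create();
    solver->setFunction(cv::makePtr<ContourCost>(distMap, cRGB));
    cv::Mat step = (cv::Mat_<double>(1, 3) << 0.05, 5.0, 5.0);   // initial simplex size
    solver->setInitStep(step);

    cv::Mat x = (cv::Mat_<double>(1, 3) << 0.0, 0.0, 0.0);       // theta, tx, ty
    solver->minimize(x);
    return { x.at<double>(0), x.at<double>(1), x.at<double>(2) };
}
```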

2.3.3. Evaluation of Registration Error

The registration error was determined by evaluating the average error between randomly chosen points on the pointclouds. Specifically, a set of N random points on the RGBD pointcloud was chosen and, for each of them, the closest point on the DICOM-based pointcloud in terms of Euclidean distance was identified. Finally, the average error E_N = (E_Nx, E_Ny, E_Nz) was evaluated, where E_Nx is the average error on the x axis, E_Ny is the average error on the y axis and E_Nz is the average error on the z axis. This procedure was repeated for increasing values of N, defining E_100, E_1000, E_10,000, E_20,000 and E_30,000. Additionally, to investigate how the registration error was spatially distributed, a heatmap representing the registration error between the RGBD and the DICOM-based pointclouds was created.
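A brute-force sketch of this error metric is reported below; in practice a k-d tree would speed up the nearest-neighbour search. The names and the fixed random seed are illustrative assumptions.

```cpp
#include <vector>
#include <random>
#include <cmath>

struct Pnt { double x, y, z; };

// Mean absolute per-axis error (E_Nx, E_Ny, E_Nz) over N random RGBD points.
Pnt meanAxisError(const std::vector<Pnt>& rgbd,
                  const std::vector<Pnt>& dicom, std::size_t N)
{
    std::mt19937 gen(42);                                    // fixed seed for repeatability
    std::uniform_int_distribution<std::size_t> pick(0, rgbd.size() - 1);
    Pnt mean{ 0.0, 0.0, 0.0 };
    for (std::size_t n = 0; n < N; ++n) {
        const Pnt& p = rgbd[pick(gen)];
        double best = 1e30;
        Pnt bestQ{ 0.0, 0.0, 0.0 };
        for (const Pnt& q : dicom) {                         // brute-force nearest neighbour
            const double d = (p.x - q.x) * (p.x - q.x)
                           + (p.y - q.y) * (p.y - q.y)
                           + (p.z - q.z) * (p.z - q.z);
            if (d < best) { best = d; bestQ = q; }
        }
        mean.x += std::fabs(p.x - bestQ.x);
        mean.y += std::fabs(p.y - bestQ.y);
        mean.z += std::fabs(p.z - bestQ.z);
    }
    mean.x /= N;  mean.y /= N;  mean.z /= N;
    return mean;
}
```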

2.3.4. Texture Mapping

Aiming to apply a real texture to the DICOM-based pointcloud, each of its points was coloured based on the registered RGBD pointcloud. Specifically, each point was assigned the colour of the RGBD point at the minimum Euclidean distance from it. Finally, the coloured 3D mesh was computed using MeshLab software (version 2022.02). The result obtained at the end of the texture mapping procedure is shown in Figure 12.
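The colour-transfer rule can be sketched as follows, again with a brute-force nearest-neighbour search and illustrative type names; the original implementation is not detailed in the text.

```cpp
#include <vector>
#include <cstdint>

struct CPoint { double x, y, z; std::uint8_t r, g, b; };

// Give every DICOM-based point the colour of its nearest RGBD neighbour.
void transferColours(std::vector<CPoint>& dicom, const std::vector<CPoint>& rgbd)
{
    if (rgbd.empty()) return;
    for (CPoint& p : dicom) {
        double best = 1e30;
        const CPoint* nearest = &rgbd.front();
        for (const CPoint& q : rgbd) {                       // brute-force nearest neighbour
            const double d = (p.x - q.x) * (p.x - q.x)
                           + (p.y - q.y) * (p.y - q.y)
                           + (p.z - q.z) * (p.z - q.z);
            if (d < best) { best = d; nearest = &q; }
        }
        p.r = nearest->r;  p.g = nearest->g;  p.b = nearest->b;
    }
}
```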

2.4. Visualization on HoloLens 2

To visualize the 3D mesh obtained with the previously described methods in an augmented reality experience, an application designed for execution on HoloLens 2 was developed using Unity 2020 (version 2020.3.48f1). The project was configured following Microsoft’s guidelines, ensuring full compatibility with augmented reality devices. Additionally, interactive features to enhance the user experience were incorporated using the Mixed Reality Toolkit (MRTK) [47]. This means that the user has complete control over the 3D model, being able to move and rotate it using hand tracking performed by the cameras integrated in the headset.
To cope with the limited computational resources of the headset, the application was configured to provide a smoother user experience. This involved enabling the headset to establish a wireless connection with a desktop computer, which handles the most resource-intensive calculations.
Specifically, Unity3D was used to develop software capable of visualizing a 3D object (.obj file format). Additionally, an object manipulation feature was implemented. Manipulation allows the user to move, rotate, enlarge, or shrink the object through specific hand gestures that are captured by the cameras of HoloLens 2 and interpreted using MRTK scripts for gesture recognition. In particular, these gestures include grasping the object with the fingers and moving the hand, rotating the wrist while holding the object with the fingers, and bringing the hands closer together or moving them apart when grasping with both hands. This feature therefore allows the user to manipulate the hologram projected into real space by the AR viewer with simple and intuitive hand movements.

3. Results

This study proposes a method for enhancing a 3D model, obtained from CBCT images, with the real external texture using an RGBD camera. The resulting outcome was then displayed using augmented reality and it was also possible to interact with it through hand gestures.
The pointcloud related to the internal tissues of the phantom was obtained by performing a CBCT scan of the same, segmenting its outer edge in each image, and then transforming all the points into a 3D pointcloud using the DICOM standard slice thickness parameter, as reported in Equation (1). The obtained result is shown in Figure 4.
The pointcloud related to the external texture of the object was obtained using an Intel D435 3D camera, registering several pointclouds using the ChArUco marker shown in Figure 5 for camera pose estimation. The obtained result is shown in Figure 7.
The first step for registering the pointclouds involved the alignment of the front faces of their bounding boxes. The result obtained is shown in Figure 8. Subsequently, the registration process used an iterative algorithm to minimize the distance between the contours of the two models, obtained by applying different rotation vectors R_1 and projecting the result onto the xy plane of the 3D camera reference system O_xyz^RGBD.
To better understand how the registration error E varied with increasing iterations, a graph was constructed and is reported in Figure 13.
As shown in the graph, the mean registration error E starts from a value of E = 1.4 mm and tends to decrease with increasing iterations. Specifically, the stopping condition of the minimization process (E ≤ E_thr = 1 mm) was reached after three iterations, with a final value of E = 0.9 mm.
At the end of the registration, the two models were correctly aligned, as shown in Figure 11. To assess the quality of the registration process, the average registration error was calculated based on different sets of randomly chosen points on the RGBD pointcloud. The obtained results are presented in Table 1.
As shown in the table, the registration error remains nearly stable regardless of the number of randomly chosen points. Starting from E_10,000, the average error along the three axes stabilizes at 1.1 ± 1.3 mm for the x-axis, 0.7 ± 0.8 mm for the y-axis, and 0.9 ± 1.2 mm for the z-axis.
After evaluating the registration error, a heatmap representing the spatial distribution of the registration error between the two pointclouds was also created (Figure 14).
At the end of the alignment process, the colours of the RGBD pointcloud were applied to the DICOM-based one using a criterion based on the minimum euclidean distance. Then, MeshLab software was used to create the real coloured 3D mesh of the object (Figure 12).
Finally, a software application was developed to enable the visualization of the coloured mesh in a virtual environment using the Microsoft HoloLens 2 device. The user interaction with the model allowed for translations, rotations, and scaling using hand gestures. The augmented reality visualization of the 3D model and user interactions is illustrated in Figure 15.

4. Discussion

The first part of this study focused on the registration process between two pointclouds: the first, created using an Intel D435 3D camera, aimed to three-dimensionally reconstruct the external texture of the human head phantom used in this study, while the second, generated from DICOM images obtained from a CBCT scan of the same phantom, aimed to three-dimensionally reconstruct its internal structure.
Using bounding boxes to perform an initial alignment between the two models and the iterative algorithm for the refinement appears effective. Figure 11 confirm successful alignment and the quantitative results in Table 1 provide additional validation. The stable registration error along the three principal axes (x, y, and z) indicates the reliability of the proposed method. Additionally, values around 1 mm in both the mean and standard deviation of the registration error suggest consistent alignment performance, which is critical for correct and faithful texturing of a 3D model, especially for medical or surgical fields. The heatmap represented in Figure 14 shows how the registration error between the RGBD and the DICOM-based pointcloud was spatially distributed, at the end of the registration phases, on the RGBD pointcloud. It can be observed how the pointcloud is composed overwhelmingly of correctly registered points (red points), but also how there are areas, especially concerning the nose, ears, and the surrounding area, where the registration error tends to increase, reaching values even higher than 5 mm (green points). The presence of these areas could be due to an incorrect estimation of the spatial position of the points by the RGBD camera. In particular, this could be caused by both the phantom used that, presenting a very homogeneous external texture, can cause an incorrect triangulation of the points by the 3D camera, and by possible light reflections on the phantom during the acquisition and registration phase of the RGBD pointclouds. Nevertheless, the registration error statistics shown in Figure 14 demonstrate that the majority of points are very close to the DICOM-based pointcloud, resulting in an average registration error tending to around 1 mm but characterized by standard deviation values always greater than the mean (see Table 1). Finally, the values reported in the table, averaging 1 mm, align with other studies where RGBD systems were tested. A study by Kilgus et al. [48] utilized a Microsoft Kinect device to gather RGBD data, aiming to enhance a CT model for forensic medicine. Their method of acquiring RGBD pointclouds mirrored the approach described in this study, involving manual movement of the Kinect around the target. The mean target registration error was measured at 4.4 ± 1.3 mm. Additionally, studies where the RGBD system was rigidly mounted to the CBCT machine [32,49] were conducted. The first study employed an Intel RealSense F200 RGBD camera to introduce a calibration method based on the simultaneous reconstruction of an object’s surface and CBCT scan, achieving an accuracy of 2.58 mm. The second study proposed an approach based on a monocular camera, 3D-2D feature mapping, and surface parameterization technology for texture surface reconstruction, reporting an accuracy of 0.32 mm. Despite the interesting results from these studies, employing a rigid mounting system between the RGBD system and CBCT machine significantly limits its general applicability. Replicating the position and methodology across all CBCT machines is not guaranteed. A methodology like the one described in this study could help address this limitation: relying on manual movement of the RGBD system ensures a low registration error while maintaining flexibility.
The second part of the study focused on the visualization of and interaction with the coloured 3D mesh, created with MeshLab software, in an augmented reality environment through the Microsoft HoloLens 2 head-mounted display. The development of a software application for visualization and interaction on the Microsoft HoloLens 2 device showcases the practical application of the proposed method. Natural hand gestures for manipulation and scaling contribute to a more immersive and user-friendly experience.
Since the described pointcloud processing and registration method does not rely on specific tools or binding techniques, it could be used and replicated with medical images acquired from other types of examinations, such as magnetic resonance or standard computed tomography, and with different external texture acquisition techniques, such as photogrammetry or structured-light cameras.
Visualization of a realistic textured 3D model reconstructed from medical images could provide doctors and surgeons with highly clinically relevant information, especially in areas where the visual concept and the preservation of the patient’s aesthetics are crucial, such as maxillofacial surgery or dentistry. Another area of application is image-guided surgical planning, where accurate and careful planning of skin entry points and trajectories is crucial for a correct outcome. Additionally, the increasing introduction of technologies like augmented, mixed, and virtual reality in the medical field holds promise: their potential is undeniable and could allow physicians and surgeons to perform image analysis or surgical planning in an innovative and user-friendly way through hand-gesture integration.
Despite the results shown, this study has some limitations. The use of a ChArUco marker for determining the camera’s position, placed behind the human-head mimicking phantom, limits the range of movement of the RGBD camera. Specifically, to ensure precise registration of the pointcloud, the marker needs to be within the camera’s field of view, which can make capturing images of the nuchal region difficult. To overcome this challenge, incorporating two or more strategically positioned ChArUco markers within the scene could be a solution. Moreover, although most of the points in the RGBD pointcloud are correctly registered (Figure 14), there are still areas, especially the nose and ear regions, where registration is incorrect. Choosing a phantom with a less uniform external texture and ensuring careful illumination during the RGBD pointcloud acquisition phases could help overcome this issue. The pointcloud alignment technique described in this study was tested on the phantom shown in Figure 1; replicating this methodology on multiple phantoms with different surface characteristics could be useful for highlighting the potential and limitations of the technique. Finally, the phantom (Figure 1) was hollow, thus lacking any structure to simulate internal tissues. The applicability of the proposed method remains valid, as it is based on the segmentation of the outer contour of the CBCT-scanned object, but future studies could test it on phantoms specifically designed to be scanned with radiographic techniques. This further step would also enable interaction with such internal structures in the AR visualization phase.

5. Conclusions

  • This study successfully achieves its objective of enhancing a 3D model reconstructed from DICOM images with real external texture using an RGBD camera.
  • The proposed method demonstrates reliable pointcloud registration, accurately aligning the pointcloud related to the internal tissues with the external texture, with a mean registration error of less than 1 mm.
  • The stable registration error across different sets of randomly chosen points reflects the method’s robustness, reaching an average error of 1.1 ± 1.3 mm for the x-axis, 0.7 ± 0.8 mm for the y-axis, and 0.9 ± 1.2 mm for the z-axis.
  • The presented mixed reality visualization using the Microsoft HoloLens 2 device allows users to explore and manipulate the coloured 3D model with intuitive hand gestures, showcasing the potential for practical applications such as surgical planning and medical imaging visualization.
  • Since the pointcloud processing and registration method described does not rely on particular tools or predetermined RGBD camera setups or positions, it has the potential to be reproduced across other medical imaging modalities, including MRI or traditional CT scans, as well as different external texture capture techniques like photogrammetry or structured-light cameras.

Author Contributions

Conceptualization, C.A., E.R., V.S., F.V. and L.B.; Data curation, C.A., A.M. and E.R.; Formal analysis, C.A.; Methodology, C.A., A.M., E.R., V.S. and F.V.; Project administration, L.B.; Software, C.A., A.M., E.R., S.L. and V.Y.C.; Supervision, L.B.; Writing—original draft, C.A. and E.R.; Writing—review & editing, C.A., A.M., E.R., S.L., V.Y.C., V.S. and F.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Jud, L.; Fotouhi, J.; Andronic, O.; Aichmair, A.; Osgood, G.; Navab, N.; Farshad, M. Applicability of augmented reality in orthopedic surgery—A systematic review. BMC Musculoskelet. Disord. 2020, 21, 103. [Google Scholar] [CrossRef] [PubMed]
  2. Lu, L.; Wang, H.; Liu, P.; Liu, R.; Zhang, J.; Xie, Y.; Liu, S.; Huo, T.; Xie, M.; Wu, X.; et al. Applications of mixed reality technology in orthopedics surgery: A pilot study. Front. Bioeng. Biotechnol. 2022, 10, 740507. [Google Scholar] [CrossRef] [PubMed]
  3. Verhey, J.T.; Haglin, J.M.; Verhey, E.M.; Hartigan, D.E. Virtual, augmented, and mixed reality applications in orthopedic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, e2067. [Google Scholar] [CrossRef]
  4. Chytas, D.; Nikolaou, V.S. Mixed reality for visualization of orthopedic surgical anatomy. World J. Orthop. 2021, 12, 727. [Google Scholar] [CrossRef]
  5. Yang, J.; Zhang, J.; Zeng, C.; Fang, Y.; Xue, M.; Wang, H.; Zhou, H.; Xie, Y.; Liu, P.; Ye, Z. Application and prospect of mixed reality technology in orthopedics. Digit. Med. 2023, 9, e00010. [Google Scholar] [CrossRef]
  6. Bian, D.; Lin, Z.; Lu, H.; Zhong, Q.; Wang, K.; Tang, X.; Zang, J. The Application of Extended Reality Technology-Assisted Intraoperative Navigation in Orthopedic Surgery. Front. Surg. 2024, 11, 1336703. [Google Scholar] [CrossRef] [PubMed]
  7. Li, W.; Liu, Y.; Wang, Y.; Zhang, X.; Liu, K.; Jiao, Y.; Zhang, X.; Chen, J.; Zhang, T. Educational value of mixed reality combined with a three-dimensional printed model of aortic disease for vascular surgery in the standardized residency training of surgical residents in China: A case control study. BMC Med. Educ. 2023, 23, 812. [Google Scholar] [CrossRef] [PubMed]
  8. Eves, J.; Sudarsanam, A.; Shalhoub, J.; Amiras, D. Augmented reality in vascular and endovascular surgery: Scoping review. JMIR Serious Games 2022, 10, e34501. [Google Scholar] [CrossRef] [PubMed]
  9. Lareyre, F.; Chaudhuri, A.; Adam, C.; Carrier, M.; Mialhe, C.; Raffort, J. Applications of head-mounted displays and smart glasses in vascular surgery. Ann. Vasc. Surg. 2021, 75, 497–512. [Google Scholar] [CrossRef]
  10. Nicolau, S.; Soler, L.; Mutter, D.; Marescaux, J. Augmented reality in laparoscopic surgical oncology. Surg. Oncol. 2011, 20, 189–201. [Google Scholar] [CrossRef]
  11. Wong, K.C.; Sun, Y.E.; Kumta, S.M. Review and future/potential application of mixed reality technology in orthopaedic oncology. Orthop. Res. Rev. 2022, 14, 169–186. [Google Scholar] [CrossRef] [PubMed]
  12. Wong, K.C.; Sun, E.Y.; Wong, I.O.L.; Kumta, S.M. Mixed reality improves 3D visualization and spatial awareness of bone tumors for surgical planning in orthopaedic oncology: A proof of concept study. Orthop. Res. Rev. 2023, 15, 139–149. [Google Scholar] [CrossRef] [PubMed]
  13. Jain, S.; Gao, Y.; Yeo, T.T.; Ngiam, K.Y. Use of Mixed Reality in Neuro-Oncology: A Single Centre Experience. Life 2023, 13, 398. [Google Scholar] [CrossRef] [PubMed]
  14. Tang, Z.N.; Hu, L.H.; Soh, H.Y.; Yu, Y.; Zhang, W.B.; Peng, X. Accuracy of mixed reality combined with surgical navigation assisted oral and maxillofacial tumor resection. Front. Oncol. 2022, 11, 715484. [Google Scholar] [CrossRef] [PubMed]
  15. Campisi, B.M.; Costanzo, R.; Gulino, V.; Avallone, C.; Noto, M.; Bonosi, L.; Brunasso, L.; Scalia, G.; Iacopino, D.G.; Maugeri, R. The Role of Augmented Reality Neuronavigation in Transsphenoidal Surgery: A Systematic Review. Brain Sci. 2023, 13, 1695. [Google Scholar] [CrossRef] [PubMed]
  16. Koike, T.; Kin, T.; Tanaka, S.; Takeda, Y.; Uchikawa, H.; Shiode, T.; Saito, T.; Takami, H.; Takayanagi, S.; Mukasa, A.; et al. Development of innovative neurosurgical operation support method using mixed-reality computer graphics. World Neurosurg. X 2021, 11, 100102. [Google Scholar] [CrossRef] [PubMed]
  17. Zhang, C.; Gao, H.; Liu, Z.; Huang, H. The potential value of mixed reality in neurosurgery. J. Craniofacial Surg. 2021, 32, 940–943. [Google Scholar] [CrossRef] [PubMed]
  18. Drouin, S.; Kersten-Oertel, M.; Chen, S.J.S.; Collins, D.L. A realistic test and development environment for mixed reality in neurosurgery. In Augmented Environments for Computer-Assisted Interventions: Proceedings of the 6th International Workshop, AE-CAI 2011, Held in Conjunction with MICCAI 2011, Toronto, ON, Canada, 22 September 2011; Revised Selected Papers 6; Springer: Berlin/Heidelberg, Germany, 2012; pp. 13–23. [Google Scholar]
  19. Pepe, A.; Trotta, G.F.; Mohr-Ziak, P.; Gsaxner, C.; Wallner, J.; Bevilacqua, V.; Egger, J. A marker-less registration approach for mixed reality–aided maxillofacial surgery: A pilot evaluation. J. Digit. Imaging 2019, 32, 1008–1018. [Google Scholar] [CrossRef]
  20. Kim, Y.C.; Park, C.U.; Lee, S.J.; Jeong, W.S.; Na, S.W.; Choi, J.W. Application of augmented reality using automatic markerless registration for facial plastic and reconstructive surgery. J. Cranio-Maxillofac. Surg. 2024, 52, 246–251. [Google Scholar] [CrossRef]
  21. Yang, R.; Li, C.; Tu, P.; Ahmed, A.; Ji, T.; Chen, X. Development and application of digital maxillofacial surgery system based on mixed reality technology. Front. Surg. 2022, 8, 719985. [Google Scholar] [CrossRef]
  22. Brunzini, A.; Mazzoli, A.; Pagnoni, M.; Mandolini, M. An innovative mixed reality approach for maxillofacial osteotomies and repositioning. Virtual Real. 2023, 27, 3221–3237. [Google Scholar] [CrossRef]
  23. Moro, C.; Phelps, C.; Redmond, P.; Stromberga, Z. HoloLens and mobile augmented reality in medical and health science education: A randomised controlled trial. Br. J. Educ. Technol. 2021, 52, 680–694. [Google Scholar] [CrossRef]
  24. Dhar, P.; Rocks, T.; Samarasinghe, R.M.; Stephenson, G.; Smith, C. Augmented reality in medical education: Students’ experiences and learning outcomes. Med. Educ. Online 2021, 26, 1953953. [Google Scholar] [CrossRef] [PubMed]
  25. Gerup, J.; Soerensen, C.B.; Dieckmann, P. Augmented reality and mixed reality for healthcare education beyond surgery: An integrative review. Int. J. Med. Educ. 2020, 11, 1. [Google Scholar] [CrossRef]
  26. Lungu, A.J.; Swinkels, W.; Claesen, L.; Tu, P.; Egger, J.; Chen, X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: An extension to different kinds of surgery. Expert Rev. Med. Devices 2021, 18, 47–62. [Google Scholar] [CrossRef] [PubMed]
  27. Goh, G.S.; Lohre, R.; Parvizi, J.; Goel, D.P. Virtual and augmented reality for surgical training and simulation in knee arthroplasty. Arch. Orthop. Trauma Surg. 2021, 141, 2303–2312. [Google Scholar] [CrossRef]
  28. Lee, G.K.; Moshrefi, S.; Fuertes, V.; Veeravagu, L.; Nazerali, R.; Lin, S.J. What is your reality? Virtual, augmented, and mixed reality in plastic surgery training, education, and practice. Plast. Reconstr. Surg. 2021, 147, 505–511. [Google Scholar] [CrossRef] [PubMed]
  29. Sánchez-Margallo, J.A.; Plaza de Miguel, C.; Fernández Anzules, R.A.; Sánchez-Margallo, F.M. Application of mixed reality in medical training and surgical planning focused on minimally invasive surgery. Front. Virtual Real. 2021, 2, 144. [Google Scholar] [CrossRef]
  30. Casas-Yrurzum, S.; Gimeno, J.; Casanova-Salas, P.; García-Pereira, I.; García del Olmo, E.; Salvador, A.; Guijarro, R.; Zaragoza, C.; Fernández, M. A new mixed reality tool for training in minimally invasive robotic-assisted surgery. Health Inf. Sci. Syst. 2023, 11, 34. [Google Scholar] [CrossRef]
  31. Xiao, D.; Lian, C.; Deng, H.; Kuang, T.; Liu, Q.; Ma, L.; Kim, D.; Lang, Y.; Chen, X.; Gateno, J.; et al. Estimating reference bony shape models for orthognathic surgical planning using 3D point-cloud deep learning. IEEE J. Biomed. Health Inform. 2021, 25, 2958–2966. [Google Scholar] [CrossRef]
  32. Lin, Q.; Xiongbo, G.; Zhang, W.; Cai, L.; Yang, R.; Chen, H.; Cai, K. A novel approach of surface texture mapping for cone-beam computed tomography in image-guided surgical navigation. IEEE J. Biomed. Health Inform. 2023. [Google Scholar] [CrossRef] [PubMed]
  33. Singh, P.; Bornstein, M.M.; Hsung, R.T.C.; Ajmera, D.H.; Leung, Y.Y.; Gu, M. Frontiers in Three-Dimensional Surface Imaging Systems for 3D Face Acquisition in Craniofacial Research and Practice: An Updated Literature Review. Diagnostics 2024, 14, 423. [Google Scholar] [CrossRef] [PubMed]
  34. Kurillo, G.; Hemingway, E.; Cheng, M.L.; Cheng, L. Evaluating the accuracy of the azure kinect and kinect v2. Sensors 2022, 22, 2469. [Google Scholar] [CrossRef] [PubMed]
  35. Yoshimoto, K.; Shinya, M. Use of the Azure Kinect to measure foot clearance during obstacle crossing: A validation study. PLoS ONE 2022, 17, e0265215. [Google Scholar] [CrossRef] [PubMed]
  36. Cho, R.Y.; Byun, S.H.; Yi, S.M.; Ahn, H.J.; Nam, Y.S.; Park, I.Y.; On, S.W.; Kim, J.C.; Yang, B.E. Comparative Analysis of Three Facial Scanners for Creating Digital Twins by Focusing on the Difference in Scanning Method. Bioengineering 2023, 10, 545. [Google Scholar] [CrossRef]
  37. QtProject. QtCreator Version: 5.12.6. 2019. Available online: https://www.qt.io/product/development-tools (accessed on 20 February 2024).
  38. Intel RealSense Depth Camera D435; Intel Corporation: Santa Clara, CA, USA, 2018. Available online: https://www.intelrealsense.com/depth-camera-d435/ (accessed on 20 February 2024).
  39. Unity; Unity Technologies: San Francisco, CA, USA, 2005. Available online: https://unity.com/ (accessed on 22 February 2024).
  40. Microsoft HoloLens 2; Microsoft Corporation: Redmond, WA, USA, 2019. Available online: https://www.microsoft.com/en-us/hololens/ (accessed on 20 February 2024).
  41. Itseez. Open Source Computer Vision Library. 2015. Available online: https://github.com/itseez/opencv (accessed on 20 February 2024).
  42. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  43. Romero-Ramirez, F.J.; Muñoz-Salinas, R.; Medina-Carnicer, R. Speeded up detection of squared fiducial markers. Image Vis. Comput. 2018, 76, 38–47. [Google Scholar] [CrossRef]
  44. Garrido-Jurado, S.; Munoz-Salinas, R.; Madrid-Cuevas, F.J.; Medina-Carnicer, R. Generation of fiducial marker dictionaries using mixed integer linear programming. Pattern Recognit. 2016, 51, 481–491. [Google Scholar] [CrossRef]
  45. Cignoni, P.; Callieri, M.; Corsini, M.; Dellepiane, M.; Ganovelli, F.; Ranzuglia, G. MeshLab: An Open-Source Mesh Processing Tool. In Proceedings of the Eurographics Italian Chapter Conference; Scarano, V., Chiara, R.D., Erra, U., Eds.; The Eurographics Association: Eindhoven, The Netherlands, 2008. [Google Scholar] [CrossRef]
  46. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  47. Microsoft. Mixed Reality Toolkit. 2016. Available online: https://github.com/MixedRealityToolkit/ (accessed on 28 February 2024).
  48. Kilgus, T.; Heim, E.; Haase, S.; Prüfer, S.; Müller, M.; Seitel, A.; Fangerau, M.; Wiebe, T.; Iszatt, J.; Schlemmer, H.P.; et al. Mobile markerless augmented reality and its application in forensic medicine. Int. J. Comput. Assist. Radiol. Surg. 2015, 10, 573–586. [Google Scholar] [CrossRef]
  49. Lee, S.C.; Fuerst, B.; Fotouhi, J.; Fischer, M.; Osgood, G.; Navab, N. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 967–975. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Human head mimicking phantom.
Figure 2. Conceptual diagram summarizing the main adopted steps.
Figure 3. Application of the Otsu’s method for the outer contour detection: the left image shows the original DICOM image while the right one shows the result obtained by applying the Otsu’s thresholding method.
Figure 4. Pointcloud obtained from the DICOM images processing.
Figure 5. ChArUco board used as a fiducial marker for RGBD camera pose estimation. The side length of each square on the board is 15 mm, while the side length of each ArUco marker is 10 mm.
Figure 6. Reference system of the ChArUco board.
Figure 7. Pointcloud obtained after the registration and segmentation processes of RGBD pointclouds.
Figure 8. Result of the initial BB-based registration process. The bounding boxes of the two registered pointclouds are represented in white.
Figure 10. Flowchart diagram of the contours-based minimization algorithm.
Figure 11. Result of the registration between RGBD and DICOM-based pointclouds.
Figure 12. Mesh obtained at the end of the texture mapping application from RGBD to DICOM-based pointcloud.
Figure 13. Plot representing the mean registration error E, calculated over a thousand random points, as a function of iterations. The error reported at iteration number zero represents the mean error obtained at the end of the preliminary registration based on bounding boxes. The red line represents the error E_thr = 1 mm chosen as the stopping criterion for the minimization process.
Figure 14. Heatmap representing how the registration error between the RGBD and the DICOM-based pointcloud is spatially distributed on the RGBD pointcloud. In the left part of the image, the colour scale is represented according to the registration error in millimetres: it ranges from red, representing areas with minimal error, to green/blue, representing areas with higher error.
Figure 15. Different types of object manipulation and interactions using hand gestures. The white lines represent the user’s hand raycasts and allow HoloLens 2 to interact with the digital object when the user’s hands are not in direct contact with it. (a) One-handed interaction with the digital object: the user can rotate and translate the object using the corresponding hand movement; (b) One-handed interaction with the digital object: the user can rotate the object by selecting one side of the box and then moving their hand; (c) One-handed interaction with the digital object: the user can scale the object by selecting one vertex of the box and then moving their hand; (d) Two-handed interaction with the digital object: the user can translate and scale the object using intuitive hand movements.
Table 1. Registration errors on each principal axis for different numbers of random points. E_x represents the error with respect to the x axis, E_y the error with respect to the y axis, and E_z the error with respect to the z axis. Each value is reported as mean ± standard deviation.
              E_x [mm]     E_y [mm]     E_z [mm]
E_100         1.1 ± 1.3    0.5 ± 0.6    0.8 ± 1
E_1000        1.1 ± 1.2    0.6 ± 0.8    0.9 ± 1.2
E_10,000      1.1 ± 1.3    0.7 ± 0.8    0.9 ± 1.2
E_20,000      1.1 ± 1.3    0.7 ± 0.8    0.9 ± 1.2
E_30,000      1.1 ± 1.3    0.7 ± 0.8    0.9 ± 1.2
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
