Article

A Novel Method Using 3D Interest Points to Place Markers on a Large Object in Augmented Reality

1 BioComputing Lab, Department of Computer Science and Engineering, Korea University of Technology and Education (KOREATECH), Cheonan 31253, Republic of Korea
2 BioComputing Lab, Institute for Bio-Engineering Application Technology, Department of Computer Science and Engineering, Korea University of Technology and Education (KOREATECH), Cheonan 31253, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(2), 941; https://doi.org/10.3390/app14020941
Submission received: 29 December 2023 / Revised: 14 January 2024 / Accepted: 19 January 2024 / Published: 22 January 2024
(This article belongs to the Special Issue Virtual/Augmented Reality and Its Applications)

Abstract:
Multiple markers are generally used in augmented reality (AR) applications that require accurate registration, such as medical and industrial fields. In AR using these markers, there are two inevitable problems: (1) geometric shape discrepancies between a real object and a virtual object, and (2) the relative positions of the markers placed on the virtual object and markers placed on the real object are not consistent. However, studies on applying multiple markers to a large object are still insufficient. Additionally, most studies did not consider these inevitable problems because the markers were subjectively placed (hereafter conventional method). In consideration of these problems, this paper proposes a method for placing multiple markers to provide accurate registration on a large object. The proposed method divides a virtual object evenly and determines the positions of multiple markers automatically using 3D interest points within the divided areas. The proposed method was validated through a performance comparison with the conventional method of subjectively placing markers, and it was confirmed to have more accurate registration.

1. Introduction

Augmented reality (AR) is a computer graphics technology that (1) combines a virtual image with a real environment, (2) enables real-time interaction, and (3) registers a virtual object in a three-dimensional space [1]. The technology to register an augmented virtual object to an object in the real environment (hereafter real object) is not only a core technology of AR but also an index for ensuring performance [2].
Registration is conducted by using markers to combine the real world and the virtual world (hereafter marker-based AR), or by using features between the real and virtual world through sensors (hereafter markerless AR). A marker generally refers to an object that can be identified and tracked through digital devices, such as camera sensors. There are various types of markers, such as simple flat images [3,4,5,6], infrared markers [7,8], and optical markers [9,10]. Markers are used as reference points for registration not only in marker-based AR but also in markerless AR. For AR applications where accurate registration is essential, such as in medical and industrial fields, marker-based registration is still required [11,12].
If the target is a large object for registration in AR using markers (e.g., vehicles, robots, and factory machines), the registration error of the marker can be increased according to the scale of the object. This error can be complemented by using more than one marker (hereafter multiple markers). This is because other markers can correct errors that may arise in a certain marker, thereby providing more accurate registration [13,14,15].
In AR using these markers, there are two inevitable problems: (1) geometric shape discrepancies between a real object and a virtual object (hereafter geometric discrepancy problem), and (2) the relative positions of markers placed on the virtual object and markers placed on the real object are not consistent (hereafter relative position consistency problem).
The geometric discrepancy problem arises from the issue that the virtual objects designed by 3D/CAD/CAM engineers are not 1:1 identical to the real objects printed out into the real world. This is because errors inevitably occur during the production process due to the state of the machinery, environment, materials, costs, etc. As it is impossible to solve these errors, acceptable errors known as tolerance are generally applied. Registration errors owing to this problem can be reduced by placing multiple markers evenly on the surface of the object.
The relative position consistency problem arises from the issue that, when the marker (A) is placed on the real object based on the position of marker (B) that is placed on the virtual object, there is no guarantee that A and B have relatively consistent positions. This is because the process of referencing A (e.g., subjective interpretation by human or image processing result by machine) may be inaccurate, and errors may occur in the process of placing markers. Registration errors owing to this problem can be reduced by placing multiple markers in clearly identifiable positions.
However, studies on applying multiple markers to a large object are still insufficient [5,14,15]. Boonbrahm et al. (2020) proposed a method to reliably augment a large object using multiple markers for remote collaboration [5]. In that method, five markers were used to augment a large object, but the registration aspect was not considered because the augmentation was performed on an arbitrary box unrelated to the actual object to be registered. Bruno et al. (2020) developed an AR system based on multiple markers, in which the markers were placed 1.5 m apart around the perimeter of a large object. However, because the registration was performed anew each time a marker was tracked, it is considered a single-marker-based registration [14]. Sheng et al. (2023) proposed a method for 3D scanning an actual large object using an AR head-mounted display. Although a virtual object was not augmented in that method, multiple markers were used for stable scanning. The markers were placed around the edge of the actual object, about 25 cm apart [15].
Additionally, marker placement criteria, such as the number of markers used and positions of markers, were arbitrarily selected in most studies; therefore, these inevitable problems were not considered. This means that, even if previous studies are replicated, it is difficult to obtain the same results (e.g., accuracy of registration) as previous studies because of human subjectivity in the process of placing markers.
In consideration of the above-mentioned problems, this paper proposes a method for placing multiple markers to provide accurate registration on a large object. The proposed method (1) divides a virtual object evenly considering the geometric discrepancy problem, (2) uses identifiable positions based on 3D interest points considering the relative position consistency problem, and (3) automatically determines the positions of multiple markers. This paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 presents two experiments conducted to validate the proposed method, along with discussions, followed by conclusions in Section 4.

2. Proposed Method for Placing Multiple Markers

The proposed method consists of a preprocessing step (Section 2.1) to remove unnecessary geometric information from the mesh of the virtual object, and a scoring step (Section 2.2) to calculate scores that indicate which positions on the surface of the mesh are suitable for marker placement. In the proposed method, the positions of the faces that compose the virtual object's mesh are considered candidates for marker placement. This is because a surface is required to attach the marker directly to the real object in AR, as shown in Figure 1.

2.1. Preprocessing Step

This step converts the surface of the mesh precisely to consider various candidates and remove unnecessary candidates. This step is performed through the following processes: (1) upscaling to precisely convert the surface of the mesh, (2) eliminating the inner surfaces to remove unnecessary candidates, and (3) applying a region of interest (ROI) to remove candidates that are difficult to attach markers to or observe.

2.1.1. Upscaling

The surface of a precise mesh is composed of numerous small faces (hereafter the face is denoted as f, the faces are denoted as f′s, and the set of faces is denoted as F). Considering this, f is sliced so that every edge is smaller than the threshold ε (hereafter the edge is denoted as e, and the set of edges is denoted as E). In this paper, 1% of the mesh scale expressed in the norm was arbitrarily used for the ε value to be sufficiently precise relative to the overall mesh scale, as expressed in Equation (1).
\varepsilon = \sqrt{(\max X - \min X)^2 + (\max Y - \min Y)^2 + (\max Z - \min Z)^2}\ /\ 100 \quad (1)
where X denotes {x | x values of vertices}, Y denotes {y | y values of vertices}, Z denotes {z | z values of vertices}, max is a function that finds the maximum value in a set, and min is a function that finds the minimum value in a set.
If any of the e′s constituting a specific i-th f (hereafter f_i) is longer than ε, the existing f_i is sliced into four similar triangles using the triangle mid-segment theorem. Subsequently, new vertices and f′s are added accordingly (hereafter the vertex is denoted as v, the vertices are denoted as v′s, and the set of vertices is denoted as V). This operation is repeated for each newly added f, and then duplicate v′s are removed. Figure 2 shows an example of this process performed on a car model, which is a specific virtual large object.
Figure 2a, where this process is not applied, is not precise enough to recognize the shapes of f′s on the surface (e.g., the glass part). On the other hand, Figure 2b, where this process is applied, is shown to have a surface so precise that it is difficult to confirm whether it is composed of f′s.
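The upscaling above can be sketched as follows: a minimal numpy implementation, assuming the mesh is given as a vertex array and triangle index triples (the function names are illustrative, not from the paper's code):

```python
import numpy as np

def epsilon(vertices):
    """1% of the mesh scale: the norm of the bounding-box extents (Eq. (1))."""
    extents = vertices.max(axis=0) - vertices.min(axis=0)
    return float(np.linalg.norm(extents)) / 100.0

def subdivide(vertices, faces, eps):
    """Slice every face whose longest edge exceeds eps into four similar
    triangles via the triangle mid-segment theorem, deduplicating vertices."""
    verts = [tuple(v) for v in vertices]
    index = {v: i for i, v in enumerate(verts)}

    def vid(p):
        """Return the index of point p, adding it if unseen."""
        p = tuple(p)
        if p not in index:
            index[p] = len(verts)
            verts.append(p)
        return index[p]

    stack, out = [tuple(f) for f in faces], []
    while stack:
        a, b, c = stack.pop()
        pa, pb, pc = np.array(verts[a]), np.array(verts[b]), np.array(verts[c])
        longest = max(np.linalg.norm(pa - pb), np.linalg.norm(pb - pc),
                      np.linalg.norm(pc - pa))
        if longest <= eps:
            out.append((a, b, c))
            continue
        # midpoints of the three edges become new (deduplicated) vertices
        ab, bc, ca = vid((pa + pb) / 2), vid((pb + pc) / 2), vid((pc + pa) / 2)
        stack += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(out)
```

Because each pass halves the edge lengths, the loop terminates once every edge of every face is at most ε.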

2.1.2. Eliminating Inner Faces

A virtual large object may be a combination of more than two small- or medium-sized objects. For example, if a virtual large object is a car, it might be a combination of the following objects: side outer, fender, handle, tire, etc. In this case, since the marker must be attached to the outer surface of the real object, the inner f′s are excluded from the candidates.
When rays are emitted in the normal direction from the center of f_i and from its constituent v′s, and any ray is hit by another f, f_i is determined to be an inner surface; otherwise, it is determined to be an outer surface. Figure 3 shows an example of determining whether surfaces are inner or outer. The f′s determined to be inner surfaces are removed because they are not suitable positions for placing markers, and unused v′s are then removed accordingly.
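This test can be sketched with a standard Möller–Trumbore ray–triangle intersection. For brevity, the sketch below casts a single ray from the face centroid only (the paper also casts rays from the constituent vertices), and the helper names are illustrative:

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test: does the ray hit the triangle strictly in front?"""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                      # ray parallel to the triangle plane
        return False
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * np.dot(e2, q) > eps        # t > 0: intersection ahead of origin

def is_inner_face(i, vertices, faces, normals, offset=1e-4):
    """f_i is 'inner' if a ray cast along its normal from just above its
    centroid is blocked by any other face of the mesh."""
    origin = vertices[faces[i]].mean(axis=0) + offset * normals[i]
    return any(ray_hits_triangle(origin, normals[i], vertices[f])
               for j, f in enumerate(faces) if j != i)
```

The small `offset` pushes the ray origin off the face itself so the face does not occlude its own ray; a spatial acceleration structure (e.g., a BVH) would replace the linear scan for large meshes.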

2.1.3. Applying ROI

If the marker is placed too high or too low, it is difficult to observe through AR devices. In consideration of this, the range from a person's knee height (hereafter h_knee) to eye height (hereafter h_eye) is set as the ROI. The h_knee and h_eye are the median measurements of males and females in their 20s–50s from the Korean anthropometric survey conducted by the Korean Agency for Technology and Standards (Table 1 and Table 2).
If the vertical value (generally y) of all v′s constituting f_i is within the ROI, f_i is maintained. Otherwise, f_i is removed and new f′s are created using the v′s within the ROI and the positions (p′, p″) at the boundary, as shown in Figure 4a. The creation of new f′s depends on the number of v′s outside the ROI, as shown in Figure 4b. After removing the unused v′s, the preprocessing is complete.
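A simplified version of the ROI filter can be sketched as follows. Note that this sketch only keeps faces lying entirely inside the ROI and drops boundary faces, whereas the paper re-triangulates boundary faces at the ROI boundary (numpy assumed):

```python
import numpy as np

def apply_roi(vertices, faces, h_knee, h_eye):
    """Keep faces whose vertices all have vertical (y) values inside
    [h_knee, h_eye]; boundary faces are dropped in this simplification."""
    y = vertices[:, 1]                      # y is the vertical axis
    inside = (y >= h_knee) & (y <= h_eye)
    keep = np.array([inside[f].all() for f in faces])
    return faces[keep]
```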

2.2. Scoring Step

In this step, the score for each candidate is calculated to determine which of all candidates of the preprocessed mesh is suitable as a position for the marker placement (hereafter MPS score, Marker Placement Suitability score). The proposed method automatically places the marker in the candidate with the highest calculated score. Based on the conditions for reducing the registration error due to the inevitable problems mentioned in the introduction, two hypotheses were considered for the calculation of the score.
Hypothesis 1.
From each area of an evenly divided virtual object, the face closest to the center of the area (CoA) would be the most suitable candidate. This is because the closer the markers are to the CoA, the more evenly they are placed.
Hypothesis 2.
A face with more interest points around it would be a more suitable candidate. This is because the more interest points surround it, the easier it is to identify.
This step is performed through the following processes: (1) centrality computation of segmented area representing scores for candidates based on hypothesis 1, (2) density computation of interest points representing scores for candidates based on hypothesis 2, and (3) calculation of the MPS scores by considering these comprehensively.

2.2.1. Centrality of Segmented Area

There are various conventional methods for segmenting meshes, but most aim to classify the mesh into meaningful parts [16,17,18]. Among segmentation methods, the main purpose of K-means clustering is to divide the mesh into k parts rather than to classify it into meaningful parts [19,20]. Therefore, this paper adopts K-means clustering as the method for segmenting meshes.
To compensate for the randomness of this method, K-means++ is used. This method ensures that the centers of the initial clusters are as far away from each other as possible, resulting in a more even division. Clustering is performed using V, and the cluster of a specific f_i is determined as the majority cluster among the three v′s that constitute it. Figure 5 shows the segmentation result using K-means++ after preprocessing a specific virtual large object (a car model).
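The segmentation and face labeling can be sketched with scikit-learn's K-means++ initialization (an assumed implementation choice; the paper does not name a library), assigning each face the majority cluster of its three vertices:

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def segment_faces(vertices, faces, k, seed=0):
    """Cluster the vertex set V with K-means++ and label each face with
    the majority cluster among its three constituent vertices."""
    vert_labels = KMeans(n_clusters=k, init="k-means++", n_init=10,
                         random_state=seed).fit_predict(vertices)
    face_labels = np.array([Counter(vert_labels[f]).most_common(1)[0][0]
                            for f in faces])
    return vert_labels, face_labels
```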
From Hypothesis 1, the specific i-th f in the j-th area among the segmented areas (hereafter this f is denoted as f_{j,i}) is placed more evenly the closer it is to the j-th CoA, making it an appropriate candidate. However, because the f′s in the j-th area (hereafter the set of these f′s is denoted as F_j) carry three-dimensional geometric information, the mean position of F_j may not represent the CoA. For example, if F_j is shaped like a sphere convexly protruding from a circular plane, as shown in Figure 6, the CoA is expected to be at the top of the sphere. However, the mean position of F_j is located below the center of the sphere. As a result, the f closest to this mean position is not the closest to the expected CoA.
To solve this problem, the dimensionality of F_j is reduced to R² using principal component analysis. Dimension reduction is performed by excluding the eigenvector with the smallest eigenvalue among the eigenvectors for V. Based on this, the CoA is defined as the mean position of F′_j = {f′_{j,i} | f′_{j,i} ∈ R², 1 ≤ i ≤ n(F_j)}, expressed as Equation (2).
CoA(F'_j) = \frac{1}{n} \sum_{i=1}^{n} position(f'_{j,i}) \quad (2)
where n denotes n(F_j), f′_{j,i} denotes f_{j,i} dimensionally reduced to R², and position is a function that obtains the position of f′_{j,i}.
The farther f_{j,i} is from CoA(F′_j), the less suitable it is as a candidate, and the closer it is, the more suitable it is. We define this degree of suitability as the centrality of segmented area (hereafter CSA), expressed as the normalized value in Equation (3).
CSA(f_{j,i}) = 1 - \frac{\lVert position(f'_{j,i}) - CoA(F'_j) \rVert}{\max_{f' \in F'_j} \lVert position(f') - CoA(F'_j) \rVert} \quad (3)
where F′_j denotes F_j dimensionally reduced to R², and f′ denotes an f dimensionally reduced to R², iterating through F′_j.
Figure 7 shows the CSA computed by Equation (3) at a specific f_{j,i}.
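Equations (2) and (3) can be sketched together: project the face positions in one area onto the two largest principal axes, take the mean of the projections as the CoA, and normalize the distances (numpy assumed; `face_centers` is an illustrative n×3 array of face positions within one segmented area):

```python
import numpy as np

def csa_scores(face_centers):
    """CSA of each face in one area: reduce the 3-D face positions to R^2
    by dropping the principal axis with the smallest eigenvalue, take the
    mean of the projected points as the CoA (Eq. (2)), and score each face
    by its normalized closeness to the CoA (Eq. (3))."""
    X = face_centers - face_centers.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X.T))  # eigenvalues ascending
    P = X @ eigvec[:, 1:]                         # keep the 2 largest axes
    coa = P.mean(axis=0)                          # Eq. (2)
    d = np.linalg.norm(P - coa, axis=1)
    return 1.0 - d / d.max()                      # Eq. (3)
```

On the hemisphere-on-a-plane example of Figure 6, dropping the smallest principal axis flattens the protrusion, so the face at the top of the sphere projects close to the 2-D mean and scores highly.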

2.2.2. Density of Interest Points

The 3D interest points are characteristic positions extracted from the three-dimensional geometric information of the mesh (hereafter a 3D interest point is referred to as an interest point). There are various methods for extracting interest points [21,22,23]; most of them extract partial features by analyzing the overall geometry of the mesh. This means that if interest points are extracted from a large object, various features may not be reflected from the perspective of each segmented area.
Therefore, Mesh Saliency, which computes the degree of saliency for every vertex, is used to extract interest points [21]. The threshold for extracting interest points is the mean value of the saliency over all vertices, as in [24]. In addition, because the target is a large object, the restriction to local maxima is not applied, so that as many interest points as possible are derived. Figure 8 shows interest points extracted after preprocessing the car models, which are specific virtual large objects.
From Hypothesis 2, the more densely the interest points are distributed around f_{j,i}, the more likely it is to be a suitable candidate. We define this degree of suitability as the density of interest points (hereafter DoI), expressed as Equation (4).
DoI(f_{j,i}) = \sum_{p \in interest(F_j)} \begin{cases} 1, & \lVert position(f_{j,i}) - p \rVert < r \\ 0, & \lVert position(f_{j,i}) - p \rVert \ge r \end{cases} \quad (4)
where interest(F_j) denotes all the interest points in F_j, p denotes a point iterating through these interest points, and r is the radius for determining whether an interest point is around f_{j,i} (in this paper, twice the value of ε from the preprocessing step is used).
As with the CSA, the DoI is also normalized, as expressed in Equation (5).
normalizeDoI(f_{j,i}) = \frac{DoI(f_{j,i})}{\max_{f \in F_j} DoI(f)} \quad (5)
where f denotes an f iterating through F_j.
Figure 9 shows the normalized DoI computed by Equation (5) at a specific f_{j,i}.

2.2.3. MPS Score

The MPS score, which indicates the suitability of a specific f_{j,i} as a position for placing a marker, is expressed as Equation (6), using a weighted mean of Equations (3) and (5) derived from the two hypotheses.
MPS(f_{j,i}) = (1 - w) \cdot CSA(f_{j,i}) + w \cdot normalizeDoI(f_{j,i}) \quad (6)
where MPS denotes a function that calculates the MPS score at f_{j,i}, and w denotes the weight used in the weighted mean.
Multiple markers are placed at f′s with the highest MPS score in each area. Figure 10 shows the MPS scores calculated according to the number of segments k and weight w on a specific virtual large object, the car model.
In Figure 10, green color indicates a high MPS score, and white color indicates a low MPS score. When no weights are considered (w = 0, second column in Figure 10a–c), it is shown that the closer to the CoA of each segmented area, the higher the MPS score, and the further away from the CoA, the lower the MPS score. On the other hand, when only the weights are considered (w = 1, fourth column in Figure 10a–c), the featured positions of each segmented region have higher MPS scores regardless of the CoA, and lower MPS scores otherwise. It is also shown that the number of segments (k) affects the MPS score because the segmented areas and CoAs are changed by k.
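Combining the two scores and picking one face per segmented area can be sketched as follows (numpy assumed; w = 0.75 is the default determined by the preliminary experiment in Section 3.2):

```python
import numpy as np

def place_markers(face_labels, csa, doi_norm, w=0.75):
    """MPS score (Eq. (6)) per face, then the index of the highest-scoring
    face within each segmented area, i.e. one marker position per area."""
    scores = (1.0 - w) * csa + w * doi_norm          # Eq. (6)
    return {int(j): int(np.argmax(np.where(face_labels == j, scores, -np.inf)))
            for j in np.unique(face_labels)}
```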

3. Experiment and Discussion

3.1. Experimental Environment and Methodology

In this paper, a preliminary experiment to determine a valid weight w (described in Section 3.2) and a main experiment to validate the proposed method (described in Section 3.3) were performed. In this section, we introduce the experimental environment and methodology used in both experiments.

3.1.1. Experimental Environment

Both experiments were conducted using Microsoft HoloLens 2, which is capable of spatial awareness of its surroundings [25]. This device provides more reliable camera pose estimation than popular AR platforms such as ARKit and ARCore [26]. In addition, this device is capable of robust pose estimation (for example, in [27], the mean error between a real and captured space when moving about 287 m in an indoor environment was 2.4 cm).
The experiments were conducted in an indoor parking lot, as shown in Figure 11, to ensure stable illumination. Because the illumination of this environment is determined by indoor lighting, it is relatively more stable than outdoor environments, which change in real time due to the sun, clouds, etc.
The experiments were conducted after sunset and before sunrise to maintain constant lighting. The civil twilight period was also excluded to minimize the influence of sunlight. The illuminance of the experimental location was measured between 100 and 200 lx during these times, and HoloLens's ability to augment virtual objects does not degrade at these illuminance levels, according to a previous study [28].
1. Real objects
In the experiments, cars, which are commonly seen in everyday life, were used as large objects. Two types of cars with different sizes were used as large objects, as shown in Figure 12: Kia-Morning (=Picanto) 3rd generation 1st facelift (hereafter DM) and Hyundai-Avante 6th generation early model (hereafter DA). The specifications for each real object are listed in Table 3.
2. Virtual objects
We used models created by measuring actual cars, from the 3D model providers CGTrader [29] and Hum3D [30], as virtual objects for each real object. CGTrader is the world's largest supplier of 3D models and has been used in many studies. In particular, Ref. [31] studied autonomous driving using a real car and a corresponding virtual object from CGTrader. Hum3D is a company that aims to create realistic modeling for applications such as AR and visualization, and its 3D experts examine whether the modeling results match actual objects. This company's models are primarily used in research on cars [32,33,34].
As virtual objects, 3D models identical to the car models (e.g., Kia-Picanto 3rd generation 1st facelift) of the real objects were used [35,36]. To verify the equivalence of the virtual models to the real objects, it was checked whether they met the tolerances in Article 115 “Tolerance of Specifications” of the “MOLIT Ordinance: Regulations for Performance and Safety Standards of Motor Vehicle and Vehicle Parts”.
The results showed that they all met the tolerances, as listed in Table 4 and Table 5 (note that Ref. [36] was created at a 1/2 scale, so the comparison was performed after scaling up to 2×). Except for the overall height, all the error rates were less than 1%. This is likely due to the influence of consumable components such as tires on this value.
Because they are virtual models, they may differ from real objects owing to options and consumables, as well as the geometric discrepancy problem. For this reason, consumable wipers, tires, and license plates were removed from the virtual models, and the issue of options is considered as the geometric discrepancy problem in this paper.

3.1.2. Methodology

1. Content for augmentation and measurement
To validate the proposed method through experiments, content that augments the virtual object based on multiple markers was developed. The content was developed using the 3D content development tool ‘Unity (2020.3.12f1 version)’ and the officially supported HoloLens 2 content development library ‘Mixed Reality Toolkit (2.7.3 version)’.
The developed content (a) augmented a virtual sphere at the index fingertip in real time through a pointing gesture with the right hand (HoloLens 2 has the highest hand tracking accuracy when the pointing gesture is maintained in the right hand [37,38]), and (b) placed a virtual marker at the position of the augmented sphere by controlling a wireless mouse with the left hand, as shown in Figure 13.
In a simple test using the real right hand and a physical marker, it was confirmed that when the virtual sphere augmented at the index fingertip was positioned at the end of the marker, an error of approximately 0.4 to 1.5 cm occurred, as shown in Figure 14. This error is considered as the relative position consistency problem in this paper.
The virtual marker was then placed on the real object by referring to a marker position image recorded for the virtual object, as shown in Figure 15. When the placement of the markers was completed, the content verified whether all the markers were at the initially input positions. If any markers differed from their initial positions, the content allowed the user to reposition them using the right index finger and the mouse.
Once it was confirmed that all the markers were placed correctly, the developed content performed registration based on the iterative closest point (ICP) using the relationship between the placed markers and the markers positioned relative to the virtual object, and finally augmented the virtual object. Figure 16 shows the results of the augmented registration on the DM and DA.
The developed content added a World Anchor property to the virtual object when the augmentation and registration of the virtual object were completed. This property is a technology that maintains the position and rotation of virtual objects placed in the real world, even if camera pose estimation errors occur and accumulate in a markerless environment [39,40,41,42]. HoloLens 2 uses the perceived surrounding space and the sensors on the device to anchor a virtual object to the real world. Figure 17 shows the spatial awareness of the real object. The registration errors of the augmented virtual object were then measured using this content.
2. Method for measuring registration error
Before the experiments, the surrounding spaces of the real object were collected for approximately 30 min so that the HoloLens 2 could sufficiently recognize them. Figure 18 shows the process of collecting the surrounding spaces.
The reference points for measuring the error between the position of the real object and the position of the virtual object (hereafter, registration error) were selected at positions that were clearly recognizable to the naked eye, as shown in Figure 19.
Four reference points were selected, as shown in Figure 20a, and each reference point is described in Table 6. The registration error was measured by recording the difference between the position of the reference point on the real object and the reference point on the virtual object, when pointing with the right index fingertip at each reference point on the real object and controlling the mouse with the left hand. Figure 20b shows the process of measuring registration errors.

3.2. Preliminary Experiment

Before validating the performance of the proposed method, the preliminary experiment was conducted to determine the valid value of the weight. Its procedure is shown in Figure 21, where the number of markers k and weight w were considered as follows.
1. The number of markers k
Previous studies on marker-based AR mentioned that at least three markers are required to solve the perspective-n-point (PnP) problem for camera pose estimation [7,43]. The ICP algorithm, which is primarily used for registration with multiple markers, also requires at least three markers [44]. Meanwhile, previous studies on markerless AR, which is relatively free from the PnP problem, often used more than three markers for the initial registration [45,46,47]. Therefore, in consideration of previous studies, the number of markers in this experiment ranged from 4 up to twice as many, 8.
2. Weight w
The weights in this experiment took a total of 5 values, from 0 (which excludes the DoI) to 1 (which excludes the CSA), split into quartiles: 0, 0.25, 0.5, 0.75, and 1.
This experiment was conducted by changing k and w and measuring the registration errors between the real object and the augmented virtual object. This process was repeated twice.

Results of Preliminary Experiment

As a result of the preliminary experiment, the mean of the registration errors (hereafter MRE) measured over all reference points was calculated as in Equation (7) and is shown in Figure 22.
MRE = \frac{1}{n} \sum \lVert p_r - p_v \rVert \quad (7)
where n denotes the number of reference points, p_r denotes the measured position of a reference point on the real object, and p_v denotes the measured position of the corresponding reference point on the virtual object.
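Equation (7) is the mean Euclidean distance over the matched reference points; a minimal numpy sketch:

```python
import numpy as np

def mean_registration_error(p_real, p_virtual):
    """MRE (Eq. (7)): mean Euclidean distance between each reference point
    measured on the real object and its counterpart on the virtual object."""
    p_real, p_virtual = np.asarray(p_real, float), np.asarray(p_virtual, float)
    return float(np.linalg.norm(p_real - p_virtual, axis=1).mean())
```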
1. Impact of weights
It was assumed that the MRE would be lowest for a certain weight, regardless of the number of markers. Accordingly, the analysis was performed using 5 weight groups, each with a total of 20 MRE values: 2 (number of models) × 5 (numbers of markers) × 2 (number of iterations).
The Shapiro–Wilk test was performed to examine whether each weight group followed a normal distribution (with a 95% confidence interval), and the results are listed in Table 7.
As a result of this test, the p-values for all groups were less than 0.05, indicating that the assumption of normality did not hold. Therefore, the non-parametric Kruskal–Wallis test was performed to confirm significant differences between the groups (with a 95% confidence interval). This test confirmed a statistically significant difference between the weight groups (H = 35.772, p = 3.22 × 10−7 < 0.001). As listed in Table 8, the MRE was lowest when the weight was 0.75; accordingly, the proposed method uses w = 0.75 as the default value.
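The statistical procedure (a per-group normality check, then a non-parametric fallback) can be sketched with scipy; this is an assumption about tooling, as the paper does not name its statistics software:

```python
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Shapiro-Wilk normality test per group; if any group departs from
    normality, compare the groups with the non-parametric Kruskal-Wallis
    test, otherwise with one-way ANOVA."""
    normal = all(stats.shapiro(g).pvalue >= alpha for g in groups)
    if normal:
        stat, p = stats.f_oneway(*groups)
        return "ANOVA", stat, p
    stat, p = stats.kruskal(*groups)
    return "Kruskal-Wallis", stat, p
```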
2. Impact of the number of markers
Because many previous studies indicated that the registration error decreases as the number of markers increases, an additional analysis was performed to confirm whether it also holds true for w = 0.75. For this analysis, five groups based on the number of markers were used, each consisting of four MRE: 2 (number of models) × 2 (number of iterations). Owing to the small number of data, the non-parametric Kruskal–Wallis test was performed. As a result of this test, there was no statistically significant difference between the groups (H = 9.300, p = 0.054 > 0.05), but this is likely due to the small sample size of the MRE comprising each group.

3.3. Main Experiment

The main experiment was conducted to validate that the performance of the multiple markers determined by the proposed method with the weight 0.75 is more effective for registration than that of multiple markers placed by the conventional methods with human subjectivity. The experimental procedure is shown in Figure 23.
Participants for the subjective placement of multiple markers were randomly recruited from among males and females in their 20s who were enrolled in or graduated from the Korea University of Technology and Education. Recruitment was conducted until the total number of participants was at least five, the minimum number used for evaluation in the field of Human–Computer Interaction [48], and a total of 10 participants were recruited (male: 8, female: 2).
Before this experiment, each participant was introduced to the purpose of the experiment and the role of the markers in AR. Each participant then placed multiple markers using a subjective marker placement program (described in Section 3.3.1) and was rewarded to encourage participation. In this experiment, the number of markers used was the minimum (4), median (6), and maximum (8) of the number of markers which were used in the preliminary experiment.
An analysis to validate the performance was performed by comparing 60 MRE from the multiple markers set by the participants, with 60 MRE from multiple markers determined by the proposed method.

3.3.1. Subjective Marker Placement Program

As shown in Figure 24, the program provided to the participants offered functions for manipulating (moving, rotating, and zooming) the visualized virtual models with the mouse and for placing markers wherever deemed suitable. To prevent mistakes, markers were created and removed simply with left- and right-clicks. During the pre-experiment briefing, participants were also instructed in how to use the program.

3.3.2. Results of Main Experiment

The respective MRE, measured 10 times each after placing the markers using the conventional subjective method (hereafter MRE_conventional) and the proposed method (hereafter MRE_proposed), are shown in Figure 25; the proposed method showed relatively less variability.
1. Performance analysis for registration accuracy
An analysis was performed to confirm whether the proposed method significantly reduced the MRE compared with the conventional method. The total MRE, pooled across the models (DM and DA), numbers of markers, and numbers of iterations, was used to test the normality of the MRE for each method. The results of the Shapiro–Wilk test for the total MRE of each method are listed in Table 9 (with a 95% confidence interval); the MRE were not normally distributed.
Therefore, the non-parametric Mann–Whitney U test was performed to test for significant differences between the two groups (with a 95% confidence interval). This test confirmed that the proposed method had a significantly lower MRE than the conventional method, regardless of the number of markers. Figure 26 shows the mean differences and significance levels between the conventional and proposed methods.
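For small samples, the Mann–Whitney U statistic underlying this comparison can be computed by direct pairwise counting. The sketch below is illustrative, not the authors' analysis code; large samples would instead use a rank-sum formula and a normal approximation for the p-value.

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a versus sample_b.

    Counts the pairs (a, b) with a > b, counting ties as 0.5.
    U ranges from 0 (every a below every b) to len(a) * len(b).
    """
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

For example, two fully separated samples give the extreme values: `mann_whitney_u([1, 2, 3], [4, 5, 6])` is 0.0 and `mann_whitney_u([4, 5, 6], [1, 2, 3])` is 9.0.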
2. Performance analysis according to the number of markers
To confirm whether the performance of the proposed method improved as the number of markers increased, as in previous studies, MRE_proposed was analyzed according to k. The Kruskal–Wallis test between groups separated by k (with a 95% confidence interval) confirmed that registration became statistically significantly more accurate as k increased (H = 10.344, p = 0.006 < 0.01). Table 10 lists the results of this test, which are consistent with previous studies.
To confirm whether this result applied equally to all models, MRE_proposed was split into DM and DA, and the analysis was repeated (with a 95% confidence interval). The results are listed in Table 11. The MRE appeared to decrease as k increased, but there was no statistically significant difference for the DA (DM: H = 19.719, p = 5.23 × 10−5 < 0.001; DA: H = 1.866, p = 0.393 > 0.05).

3.4. Discussion

The experiments verified that the proposed method determines the positions of multiple markers better than the conventional method, in which the positions are determined subjectively by humans.
The weight determined by the preliminary experiment (w = 0.75) indicates that the DoI is more effective than the CSA at minimizing the inevitable problems that arise in AR. It was also confirmed that the registration error decreased as the proportion of the DoI increased: the more interest points a region contains, the more easily humans or sensors can recognize it. However, the registration error increased when the CSA was completely excluded (w = 1), meaning that even a small proportion of the CSA must be reflected to minimize the geometric discrepancy problem.
When the minimum number of markers (k = 4) was used, MRE_proposed was approximately 59.8% lower than MRE_conventional, and the reduction in error grew as k increased. This means that the proposed method can be applied across environments using anywhere from a few to many markers.
In the performance analysis of the proposed method according to the number of markers, the error decreased significantly as the number of markers increased, consistent with previous studies. However, no significant reduction was observed for the specific object DA. This means that using only four markers on the DA is no different from using six or eight, so many markers are unnecessary; an optimal number of markers may therefore exist depending on the geometry of a specific model.
Lastly, because the proposed method automatically determines the positions of the multiple markers used for registration, it can serve as a comparison group in subsequent and extended experiments that compare registration performance.

4. Conclusions

This paper proposed a multiple-marker placement method that minimizes the inevitable problems arising when placing markers in AR, and validated its performance. Unlike the conventional method, in which marker positions are chosen subjectively, the proposed method automatically determines the marker positions by reflecting the geometric features of the virtual object.
The proposed method divides the virtual object as evenly as possible and calculates the CSA and the DoI within each segmented area using the interest points. To consider both, an MPS score formula based on a weighted mean was derived, and scores were calculated for all faces. The position of each marker was set at the center of the face with the highest MPS score in its segmented area.
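The per-face scoring and selection step can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it assumes the CSA and DoI terms have already been normalized to [0, 1], and it assumes the weighted mean takes the form (1 − w)·CSA + w·DoI with w as the DoI weight (an arrangement consistent with w = 0.75 favoring the DoI and w = 1 excluding the CSA). The function and variable names are hypothetical.

```python
def mps_score(csa, doi, w):
    """Weighted mean of the normalized CSA and DoI terms.

    Assumption: csa and doi are pre-normalized to [0, 1]; w is the
    DoI weight, so 1 - w is the CSA weight (w = 0.75 in the paper).
    """
    return (1.0 - w) * csa + w * doi

def pick_marker_faces(segments, w=0.75):
    """For each segmented area, return the face with the highest MPS score.

    `segments` maps a segment id to a list of (face_id, csa, doi)
    tuples; the marker would be placed at the chosen face's center.
    """
    return {
        seg: max(faces, key=lambda f: mps_score(f[1], f[2], w))[0]
        for seg, faces in segments.items()
    }
```

With w = 0.75, a face with a high DoI dominates one with a high CSA: for faces (csa, doi) of (0.9, 0.1) and (0.2, 0.8), the scores are 0.3 and 0.65 respectively, so the second face is selected.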
In the proposed method, the weight for calculating the MPS score was set to 0.75 based on the preliminary experiment, in which the registration error was measured after augmenting the virtual object while varying the weight. The preliminary experiment confirmed that the CSA should not be completely excluded, but the DoI should be weighted more heavily.
The performance of the proposed method was validated by comparing it with the conventional method, in which markers were subjectively placed. The proposed method was confirmed to outperform the conventional method as follows: (1) it provides more consistently accurate registration, and (2) it provides more accurate registration regardless of the number of markers.
This means that the proposed method can be applied across environments using anywhere from a few to many markers, and it can serve as a baseline for placing multiple markers in registration-related research.
The advantage of the proposed method is that it can replace subjective human judgment when positioning multiple markers. Since human judgment is highly variable depending on the surrounding conditions or environments, it may have limitations in consistency and accuracy. In contrast, the proposed method can ensure consistent performance and accuracy because it determines the positions of multiple markers by considering the CSA and the DoI of the object.
The following studies are considered in future works. First, as the methods for extracting the interest points of virtual objects continue to evolve, studies are required to compare the performance of the proposed method using various interest point extraction methods.
Second, a study is needed to verify the effect on registration performance of the two thresholds used in the proposed method. The proposed method used a threshold ε for upscaling in the preprocessing step and a threshold 2ε for calculating the DoI in the scoring step. Since both thresholds are involved in the calculation of the MPS score, their effect should be closely examined in future research.
Third, further research is needed to perform multifaceted sensitivity analyses by applying the proposed method to multiple large objects. Since the proposed method was evaluated on two specific models of similar size, the chosen weight may be overfitted to vehicles. This implies that the parameters or formulas of the proposed method could be improved depending on the object type or size. To compensate, the proposed method should be applied to large objects of various types, not limited to vehicles, followed by sensitivity analyses.
Fourth, further research should compare registration based on the proposed method with registration based on markerless methods. The registration performance of the proposed method was confirmed only by comparison with the conventional subjective method; it was not compared with markerless registration methods.
Finally, the K-means++ clustering used in this paper for area segmentation mitigates the randomness of K-means clustering but does not eliminate it. Although the results of the proposed method are mostly similar across runs, identical results are not always guaranteed. Therefore, the area segmentation method should be improved so that the same results can always be derived.
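One pragmatic mitigation for this run-to-run variation, sketched below under the assumption that segmentation starts from k-means++ seeding over face-center points, is to pin the random source: the D² weighting is standard k-means++, and a fixed seed makes the chosen initial centers, and hence the segmentation, repeatable. This is an illustrative sketch, not the paper's implementation.

```python
import random

def kmeanspp_seeds(points, k, seed=0):
    """K-means++ initial centers; a fixed seed makes the choice repeatable.

    points: list of coordinate tuples, e.g. (x, y, z). The next center
    is sampled with probability proportional to the squared distance
    to the nearest already-chosen center (the D^2 weighting).
    """
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen center.
        d2 = [
            min(sum((p[i] - c[i]) ** 2 for i in range(len(p))) for c in centers)
            for p in points
        ]
        # Sample the next center with probability proportional to D^2.
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers
```

Calling this twice with the same seed returns identical centers, which removes the residual randomness noted above at the cost of fixing one particular (still D²-weighted) initialization.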

Author Contributions

Conceptualization, Y.S.K.; methodology, Y.S.K. and S.Y.K.; software, S.Y.K.; validation, Y.S.K. and S.Y.K.; formal analysis, S.Y.K.; investigation, Y.S.K. and S.Y.K.; resources, S.Y.K.; data curation, S.Y.K.; writing—original draft preparation, S.Y.K.; writing—review and editing, Y.S.K.; visualization, S.Y.K.; supervision, Y.S.K.; project administration, Y.S.K.; funding acquisition, Y.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the significant investment of time and money in data acquisition.

Acknowledgments

This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C2002838).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azuma, R.T. A Survey of Augmented Reality. Presence Teleoper. Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  2. Li, Y.-B.; Kang, S.-P.; Qiao, Z.-H.; Zhu, Q. Development actuality and application of registration technology in augmented reality. In Proceedings of the 2008 International Symposium on Computational Intelligence and Design, Wuhan, China, 17–18 October 2008; Volume 2, pp. 69–74. [Google Scholar]
  3. Wagner, D.; Schmalstieg, D. Artoolkitplus for pose tracking on mobile devices. In Proceedings of the 12th Computer Vision Winter Workshop, St. Lambrecht, Austria, 6–8 February 2007; pp. 136–146. [Google Scholar]
  4. Ha, T.; Woo, W.; Lee, J.; Ryu, J.; Choi, H.; Lee, K. ARtalet: Tangible user interface based immersive augmented reality authoring tool for Digilog book. In Proceedings of the 2010 International Symposium on Ubiquitous Virtual Reality, Gwangju, Republic of Korea, 7–10 July 2010; pp. 40–43. [Google Scholar]
  5. Boonbrahm, P.; Kaewrat, C.; Boonbrahm, S. Effective collaborative design of large virtual 3D model using multiple AR markers. Procedia Manuf. 2020, 42, 387–392. [Google Scholar] [CrossRef]
  6. Ruan, K.; Jeong, H. An augmented reality system using Qr code as marker in android smartphone. In Proceedings of the 2012 Spring Congress on Engineering and Technology, Xi’an, China, 27–30 May 2012; pp. 1–3. [Google Scholar]
  7. Tone, D.; Iwai, D.; Hiura, S.; Sato, K. Fibar: Embedding optical fibers in 3d printed objects for active markers in dynamic projection mapping. IEEE Trans. Vis. Comput. Graph. 2020, 26, 2030–2040. [Google Scholar] [CrossRef] [PubMed]
  8. Park, H.; Park, J.I. Invisible marker–based augmented reality. J. Hum.-Comput. Interact. 2010, 26, 829–848. [Google Scholar] [CrossRef]
  9. Ieiri, S.; Uemura, M.; Konishi, K.; Souzaki, R.; Nagao, Y.; Tsutsumi, N.; Taguchi, T. Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. Pediatr. Surg. Int. 2012, 28, 341–346. [Google Scholar] [CrossRef]
  10. Blankemeyer, S.; Wiemann, R.; Posniak, L.; Pregizer, C.; Raatz, A. Intuitive robot programming using augmented reality. Procedia CIRP 2018, 76, 155–160. [Google Scholar] [CrossRef]
  11. Pepe, A.; Trotta, G.F.; Mohr-Ziak, P.; Gsaxner, C.; Wallner, J.; Bevilacqua, V.; Egger, J. A marker-less registration approach for mixed reality–aided maxillofacial surgery: A pilot evaluation. J. Digit. Imaging 2019, 32, 1008–1018. [Google Scholar] [CrossRef] [PubMed]
  12. Biun, J.; Dudhia, R.; Arora, H. The in-vitro accuracy of fiducial marker-based versus markerless registration of an intraoral scan with a cone-beam computed tomography scan in the presence of restoration artifact. Clin. Oral Implant. Res. 2023, 34, 1257–1266. [Google Scholar] [CrossRef] [PubMed]
  13. Lee, J.Y.; Seo, D.W.; Rhee, G.W. Tangible authoring of 3D virtual scenes in dynamic augmented reality environment. Comput. Ind. 2011, 62, 107–119. [Google Scholar] [CrossRef]
  14. Bruno, F.; Barbieri, L.; Marino, E.; Muzzupappa, M.; Colacino, B. A Handheld Mobile Augmented Reality Tool for On-Site Piping Assembly Inspection. In Proceedings of the International Conference on Design Tools and Methods in Industrial Engineering, Modena, Italy, 9–10 September 2019; pp. 129–139. [Google Scholar]
  15. Sheng, Y.T.; Liong, S.T.; Wang, S.Y.; Gan, Y.S. 3D printing on freeform surface: Real-time and accurate 3D dynamic dense surface reconstruction with HoloLens and displacement measurement sensors. Adv. Mech. Eng. 2023, 15, 16878132221148404. [Google Scholar] [CrossRef]
  16. Kalogerakis, E.; Hertzmann, A.; Singh, K. Learning 3D mesh segmentation and labeling. ACM Trans. Graph. 2010, 29, 102. [Google Scholar] [CrossRef]
  17. Bao, X.; Tong, W.; Chen, F. A Spectral Segmentation Method for Large Meshes. Commun. Math. Stat. 2023, 11, 583–607. [Google Scholar] [CrossRef]
  18. Hu, Z.; Bai, X.; Shang, J.; Zhang, R.; Dong, J.; Wang, X.; Sun, G.; Fu, H.; Tai, C.L. Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation of Indoor Scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 1–12. [Google Scholar] [CrossRef] [PubMed]
  19. Lloyd, S. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137. [Google Scholar] [CrossRef]
  20. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 21 June-18 July 1965; pp. 281–297. [Google Scholar]
  21. Lee, C.H.; Varshney, A.; Jacobs, D.W. Mesh saliency. ACM Trans. Graph. 2005, 24, 659–666. [Google Scholar] [CrossRef]
  22. Sipiran, I.; Bustos, B. Harris 3D: A robust extension of the Harris operator for interest point detection on 3D meshes. Vis. Comput. 2011, 27, 963–976. [Google Scholar] [CrossRef]
  23. Godil, A.; Wagan, A.I. Salient local 3D features for 3D shape retrieval. In Three-Dimensional Imaging, Interaction, and Measurement, Proceedings of the IS&T/SPIE Electronic Imaging, 2011, San Francisco, CA, USA, 23–27 January 2011; SPIE: Bellingham, WA, USA, 2011; Volume 7864, pp. 275–282. [Google Scholar]
  24. Dutagaci, H.; Cheung, C.P.; Godil, A. Evaluation of 3D interest point detection techniques via human-generated ground truth. Vis. Comput. 2012, 28, 901–917. [Google Scholar] [CrossRef]
  25. Microsoft. Spatial Mapping. Available online: https://learn.microsoft.com/ko-kr/windows/mixed-reality/design/spatial-mapping (accessed on 21 May 2023).
  26. Feigl, T.; Porada, A.; Steiner, S.; Löffler, C.; Mutschler, C.; Philippsen, M. Localization Limitations of ARCore, ARKit, and HoloLens in Dynamic Large-scale Industry Environments. In Proceedings of the 15th International Conference on Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020; pp. 307–318. [Google Scholar]
  27. Hübner, P.; Clintworth, K.; Liu, Q.; Weinmann, M.; Wursthorn, S. Evaluation of HoloLens tracking and depth sensing for indoor mapping applications. Sensors 2020, 20, 1021. [Google Scholar] [CrossRef]
  28. Blom, L. Impact of Light on Augmented Reality: Evaluating How Different Light Conditions Affect the Performance of Microsoft HoloLens 3D Applications. Bachelor Thesis, Linköping University, Linköping, Sweden, 2018. [Google Scholar]
  29. CGTrader. Available online: https://www.cgtrader.com (accessed on 21 May 2023).
  30. Hum3D. Available online: https://hum3d.com (accessed on 22 May 2023).
  31. Häuslschmid, R.; von Buelow, M.; Pfleging, B.; Butz, A. Supporting Trust in autonomous driving. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, Limassol, Cyprus, 13–16 March 2017; pp. 319–329. [Google Scholar]
  32. Cha, S.G.; Yoon, Y.J.; Lee, Y.J.; Hong, S.T.; Mun, H.S.; Park, Y.H. Integrated Module Antenna for Automotive UWB Application. Appl. Sci. 2022, 12, 11423. [Google Scholar] [CrossRef]
  33. Shadrin, S.S.; Makarova, D.A.; Ivanov, A.M.; Maklakov, N.A. Safety Assessment of Highly Automated Vehicles Using Digital Twin Technology. In Proceedings of the 2021 Intelligent Technologies and Electronic Devices in Vehicle and Road Transport Complex, Moscow, Russia, 11–12 November 2021; pp. 1–5. [Google Scholar]
  34. Laili, M.F.; Kassim, K.A.A.; Johari, M.H.; Ahmad, Y.; Yahya, W.J. Development of Tata Super Ace finite element model. J. Adv. Veh. Syst. 2021, 11, 13–31. [Google Scholar]
  35. Hum3D. Kia Picanto GT-Line 2022. Available online: https://hum3d.com/ko/3d-models/kia-picanto-gt-line-2020 (accessed on 22 May 2023).
  36. CGTrader. Hyundai Elantra 2017. Available online: https://www.cgtrader.com/3d-models/car/car/hyundai-elantra-2017-30afa623-02c3-4cf8-a171-8dbf3ac03a96 (accessed on 21 May 2023).
  37. Microsoft. Specification of HoloLens 2. Available online: https://www.microsoft.com/ko-kr/hololens/hardware (accessed on 1 December 2023).
  38. Soares, I.; Sousa, R.B.; Petry, M.; Moreira, A.P. Accuracy and repeatability tests on HoloLens 2 and HTC Vive. Multimodal Technol. Interact. 2021, 5, 47. [Google Scholar] [CrossRef]
  39. Jeong, C.S.; Kim, J.S.; Kim, D.K.; Kwon, S.C.; Jung, K.D. AR anchor system using mobile based 3D GNN detection. Int. J. Internet Broadcast. Commun. 2021, 13, 54–60. [Google Scholar]
  40. Neb, A.; Strieg, F. Generation of AR-enhanced assembly instructions based on assembly features. Procedia CIRP 2018, 72, 1118–1123. [Google Scholar] [CrossRef]
  41. Khan, F.A.; Rao, V.V.R.M.K.; Wu, D.; Arefin, M.S.; Phillips, N.; Swan, J.E. Measuring the perceived three-dimensional location of virtual objects in optical see-through augmented reality. In Proceedings of the 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bari, Italy, 4–8 October 2021; pp. 109–117. [Google Scholar]
  42. Reyes-Aviles, F.; Fleck, P.; Schmalstieg, D.; Arth, C. Compact World Anchors: Registration Using Parametric Primitives as Scene Description. IEEE Trans. Vis. Comput. Graph. 2022, 29, 4140–4153. [Google Scholar] [CrossRef] [PubMed]
  43. Buń, P.; Górski, F.; Wichniarek, R.; Kuczko, W.; Zawadzki, P. Immersive educational simulation of medical ultrasound examination. Procedia Comput. Sci. 2015, 75, 186–194. [Google Scholar] [CrossRef]
  44. Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 5, 698–700. [Google Scholar] [CrossRef] [PubMed]
  45. de Almeida, A.G.C.; de Oliveira Santos, B.F.; Oliveira, J.L. A Neuronavigation System Using a Mobile Augmented Reality Solution. World Neurosurg. 2022, 167, e1261–e1267. [Google Scholar] [CrossRef] [PubMed]
  46. Cao, A.; Dhanaliwala, A.; Shi, J.; Gade, T.P.; Park, B.J. Image-based marker tracking and registration for intraoperative 3D image-guided interventions using augmented reality. In Proceedings of the Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, Houston, TX, USA, 15–20 February 2020; Volume 11318, p. 1131802. [Google Scholar]
  47. Kunz, C.; Maurer, P.; Kees, F.; Henrich, P.; Marzi, C.; Hlaváč, M.; Mathis-Ullrich, F. Infrared marker tracking with the HoloLens for neurosurgical interventions. Curr. Dir. Biomed. Eng. 2020, 6, 20200027. [Google Scholar] [CrossRef]
  48. Nielsen, J. Finding usability problems through heuristic evaluation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Monterey, CA, USA, 3–7 May 1992; pp. 373–380. [Google Scholar]
Figure 1. Prerequisite for attaching markers to the real object: (a) non-attachable; (b) attachable (=face).
Figure 2. Example of the upscaling process: (a) before; (b) after.
Figure 3. Example of inner and outer surface determination.
Figure 4. Determination of how to create faces based on ROI boundaries: (a) intersecting the ROI boundary through the face; (b) new faces created by vertex conditions.
Figure 5. Example of segmenting a virtual object: (a) preprocessed source; (b) k = 4; (c) k = 6; (d) k = 8.
Figure 6. The expected CoA and mean position in F_j.
Figure 7. Example of CSA computation.
Figure 8. Example of interest points extracted from virtual large objects: (a) A-segment car model; (b) C-segment car model; (c) D-segment car model.
Figure 9. Example of DoI computation.
Figure 10. Example of MPS score calculation according to k and w: (a) k = 4; (b) k = 6; (c) k = 8.
Figure 11. Indoor parking lot of the Future-Learning Center, Korea University of Technology and Education, where the experiments were conducted.
Figure 12. Real objects used in the experiments: (a) DM_R; (b) DA_R, where an object with the subscript R denotes a real object.
Figure 13. Marker placement process in the developed experimental content: (a) real-time tracking of the right index fingertip; (b) placing a virtual marker when controlling the mouse with the left hand.
Figure 14. Simple test result using a physical marker (unit: cm).
Figure 15. Marker placement process: (a) reference image; (b) marker placement.
Figure 16. Augmented registration results (k = 4, w = 0.5): (a) DM; (b) DA.
Figure 17. Spatial awareness of real world objects: (a) DM; (b) DA.
Figure 18. Surrounding space collection for spatial awareness (DA).
Figure 19. Examples of reference points (clearly recognizable positions, red circles) for measuring the error between the positions of virtual and real objects (DA).
Figure 20. Method for measuring registration errors: (a) reference points; (b) measurement process.
Figure 21. Preliminary experimental procedure for determining a valid weight.
Figure 22. Visualized results of the preliminary experiment.
Figure 23. Main experimental procedure for validating the performance of the proposed method.
Figure 24. Creating markers using the subjective marker placement program.
Figure 25. Visualized main experimental results: box whiskers (outliers are excluded from this figure).
Figure 26. Difference in MRE between the proposed and conventional methods.
Table 1. Mean knee heights of males and females in their 20s–50s (unit: mm).

Age | Male | Female
20s | 442.662 | 408.077
30s | 438.340 | 397.864
40s | 432.436 | 397.931
50s | 423.298 | 392.718

Table 2. Mean eye heights of males and females in their 20s–50s (unit: mm).

Age | Male | Female
20s | 1615.147 | 1489.162
30s | 1595.316 | 1464.283
40s | 1571.064 | 1451.561
50s | 1547.258 | 1434.260

Table 3. Specifications of real objects (unit: mm).

Type | DM_R | DA_R
Overall length | 3595 | 4570
Overall width | 1595 | 1800
Overall height | 1485 | 1440

Table 4. Comparison of DM_R and DM_V.

DM | Width | Height | Length
Real (mm) | 1595 | 1485 | 3595
Virtual [35] | 1596.99 | 1511.19 | 3606.88
Error (mm) | 1.99 | 26.19 | 11.88
Tolerance (mm) | ±30 | ±50 | ±40
Error rate (%) | 0.125 | 1.764 | 0.330

where an object with the subscript V denotes a virtual object.

Table 5. Comparison of DA_R and DA_V.

DA | Width | Height | Length
Real (mm) | 1800 | 1440 | 4570
Virtual [36] (2×) | 1803.89 | 1456.39 | 4558.64
Error (mm) | 3.89 | 16.39 | 11.36
Tolerance (mm) | ±40 | ±60 | ±50
Error rate (%) | 0.216 | 1.138 | 0.249

Table 6. Detailed description of reference points used to measure registration errors.

Point | Description
a | The top end of the panel where the door connects to the fender (front, left).
b | The top end of the panel where the door connects to the fender (front, right).
c | The top end of the panel where the door connects to the fender (back, left).
d | The top end of the panel where the door connects to the fender (back, right).

Table 7. Shapiro–Wilk test results for the five weight groups.

w | Statistic | p-Value
0 | 0.796 | 0.001
0.25 | 0.830 | 0.003
0.5 | 0.637 | 7.28 × 10−6
0.75 | 0.807 | 0.001
1 | 0.838 | 0.003

Table 8. Means and standard deviations of the five weight groups.

w | M (mm) | STD (mm)
0 | 3.271 | 0.978
0.25 | 2.959 | 1.065
0.5 | 1.950 | 0.578
0.75 | 1.683 | 0.331
1 | 2.130 | 0.475

The gray row indicates the lowest MRE (M/STD), obtained when w is 0.75.

Table 9. Shapiro–Wilk test results for the total MRE.

Method | Statistic | p-Value
conventional | 0.862 | 6.95 × 10−6
proposed | 0.470 | 2.16 × 10−13

Table 10. Overall comparison of MRE_proposed by number of markers k.

k | M (mm) | STD (mm)
4 | 2.184 | 0.662
6 | 1.770 | 0.298
8 | 1.581 | 0.328

Table 11. Individual comparisons of MRE_proposed by number of markers k (mean ± standard deviation).

k | DM (p = 5.23 × 10−5) | DA (p = 0.393)
4 | 2.802 ± 0.292 | 1.566 ± 0.165
6 | 1.966 ± 0.258 | 1.573 ± 0.185
8 | 1.691 ± 0.394 | 1.471 ± 0.189

Share and Cite

MDPI and ACS Style

Kim, S.Y.; Kim, Y.S. A Novel Method Using 3D Interest Points to Place Markers on a Large Object in Augmented Reality. Appl. Sci. 2024, 14, 941. https://doi.org/10.3390/app14020941
