Article

Motion Sickness in Mixed-Reality Situational Awareness System

1 iCV Research Lab, University of Tartu, 51009 Tartu, Estonia
2 Vegvisir, 11312 Tallinn, Estonia
3 PwC Advisory, 00180 Helsinki, Finland
4 Institute of Higher Education, Yildiz Technical University, Besiktas, Istanbul 34349, Turkey
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2231; https://doi.org/10.3390/app14062231
Submission received: 17 October 2023 / Revised: 17 January 2024 / Accepted: 29 January 2024 / Published: 7 March 2024

Abstract: This research focuses on enhancing the user experience within a Mixed-Reality Situational Awareness System (MRSAS). The study employed the Simulator Sickness Questionnaire (SSQ) to gauge and quantify the user experience and to compare the effects of changes to the system. As SSQ results are highly dependent on inherent motion sickness susceptibility, the Motion Sickness Susceptibility Questionnaire (MSQ) was used to normalize the results. The experimental conditions were tested on a simulated setup, which was also compared to its real-life counterpart. This simulated setup was adjusted to best match the conditions found in the real system by using post-processing effects. The test subjects primarily consisted of university students aged 17–28, representing both male and female genders, as well as a secondary set with a larger age range but predominantly male. In total, there were 41 unique test subjects in this study. The parameters analyzed were the Field of View (FoV) of the headset, the effects of peripheral and general blurring, camera distortions, camera white balance, and users' adaptability to VR over time. All results are presented as the average of multiple user results and as scaled by user MSQ. The findings suggest that SSQ scores increase rapidly in the first 10–20 min of testing and level off at around 40–50 min. Repeated exposure to VR reduces motion sickness buildup, and a FoV of 49–54 degrees is ideal for an MRSAS setup. Additionally, camera-based effects like lens distortion and automatic white balance had negligible effects on MS. In this study, a new MSQ-based SSQ normalization technique was also developed and utilized for comparison. While the experiments in this research were primarily conducted with the goal of improving the physical Vegvisir system, the results themselves may be applicable to a broader array of VR/MR awareness systems and can help improve the UX of future applications.

1. Introduction

Motion sickness is an uncomfortable phenomenon experienced by individuals in response to Virtual Reality (VR) environments or motion stimuli, and it necessitates reliable measurement methods to assess its impact. Objective methods measure physiological signals such as body sway amplitude, blood pressure, body temperature, sweating, changes in skin conductivity, heart rate, EGG, and EEG, all of which can indicate motion sickness [1]. These methods yield very accurate Motion Sickness (MS) readings for a single participant, but the required setup is not very portable and may not provide accurate results in in-field conditions. As an alternative, subjective user-based questionnaire scores have been widely used and proven to be reliable [2]. Two significant instruments employed in this context are the Simulator Sickness Questionnaire (SSQ) and the Motion Sickness Susceptibility Questionnaire (MSQ) [3].
A large contributor to MS is latency in VR and the way it breaks immersion [4]. Multiple types of latency can be explored in an MRSAS system, the two main ones being glass-to-glass latency and motion-to-picture latency. The former describes the delay between a change in the camera input and that change being reflected on the VR Head-Mounted Display (HMD). The latter describes the delay between the user's motion and that motion being reflected in the HMD. Motion-to-picture latency and its effects on MS have been widely explored, and it is the primary consideration for most VR applications; as such, all additions made to a VR system in the hopes of reducing MS should be weighed against their computational cost and, by extension, their effects on latency [5].

1.1. Mixed-Reality Situational Awareness System

The term Mixed-Reality Situational Awareness System refers to a 360-degree real-time camera pass-through system for a VR device. The functionality is similar to direct camera-feed-to-HMD systems such as FPV drone HMDs. The primary difference is the addition of head tracking on the HMD coupled with a 360-degree camera feed projection, mimicking a traditional VR experience.

1.2. Simulator Sickness Questionnaire

The SSQ is a survey-based tool developed by Kennedy et al. [3] to quantify the severity of motion sickness symptoms experienced by participants in response to VR stimuli or simulators. Comprising 16 items, the questionnaire prompts respondents to rate their symptoms on a scale ranging from 0 to 3, reflecting the intensity of their discomfort. The items cover various aspects of motion sickness, including nausea, disorientation, and general discomfort. The SSQ effectively gauges the extent of an individual’s motion sickness experience, allowing for a comprehensive assessment of their susceptibility to this phenomenon. During this study, only the overall SSQ score was used to evaluate different conditions.
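The arithmetic behind the overall score can be sketched in code. The weighting constants below follow the standard Kennedy et al. scoring (Nausea, Oculomotor, and Disorientation subscales plus a weighted total); the three-symptom subset and its subscale membership shown here are purely illustrative, not the full 16-item mapping from the original questionnaire.

```python
# Sketch of Kennedy-style SSQ scoring. Each symptom is rated 0-3 and
# belongs to one or more of the Nausea (N), Oculomotor (O), and
# Disorientation (D) subscales; some symptoms count toward two subscales.
WEIGHTS = {"N": 9.54, "O": 7.58, "D": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(ratings, membership):
    """ratings: dict symptom -> rating (0..3);
    membership: dict symptom -> set of subscales it contributes to."""
    raw = {"N": 0, "O": 0, "D": 0}
    for symptom, rating in ratings.items():
        for scale in membership[symptom]:
            raw[scale] += rating
    subscales = {s: raw[s] * WEIGHTS[s] for s in raw}
    total = sum(raw.values()) * TOTAL_WEIGHT  # overall SSQ score
    return subscales, total

# Toy example with a hypothetical three-symptom subset.
ratings = {"nausea": 2, "eyestrain": 1, "vertigo": 3}
membership = {"nausea": {"N", "D"}, "eyestrain": {"O"}, "vertigo": {"D"}}
subscales, total = ssq_scores(ratings, membership)
```

As in the study, only the overall weighted total would be used for comparing conditions; the subscale scores are computed here only to show the structure of the instrument.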
The literature references a rapid evaluation technique for the SSQ, termed the verbal Fast Motion Sickness (FMS) score. Although evidence suggests a correlation between FMS and SSQ, the FMS method was not adopted in these particular tests due to concerns about the method's precision [6].

1.3. Motion Sickness Susceptibility Questionnaire

The MSQ was proposed by Frank et al. [7] and operates as a survey-based measure aimed at evaluating an individual’s susceptibility to motion sickness. By employing questions and response scales tailored to assessing one’s predisposition to motion sickness, the MSQ serves as a pre-screening tool. The questionnaire delves into factors such as age, gender, and prior VR experience, aiming to identify potential contributors to motion sickness susceptibility. A higher MSQ score typically suggests a higher likelihood of experiencing motion sickness when exposed to VR environments or stimuli.
In this study, the MSQ was used to counteract the effects of inherent motion sickness susceptibility in different users when evaluating SSQ scores. Both questionnaires have been included in the Supplementary Materials.

2. Related Work

The field of VR has undergone significant development, transitioning from its initial phase of exploration to a rapidly growing domain with diverse applications in areas such as entertainment, education, and medical rehabilitation. Despite the promising potential of VR, a significant obstacle to its widespread adoption is the occurrence of motion sickness caused by virtual environments [8,9,10,11,12,13,14,15]. The purpose of this section is to provide a comprehensive analysis of the causes, measurement techniques, and potential solutions for motion sickness induced by VR, achieved by reviewing and analyzing the current body of literature on the topic.
Full 360-degree VR videos and the time consistency of stitching the scene has been shown to cause MS [16]. Several methods have already been proposed to alleviate MS in these cases but not all of them have been thoroughly researched. Patrao et al. [2] suggest a VR environment should be treated as a theater and not a film; this means camera animations and tricks like involuntary zooming and a lack of a reference can have a massive impact on comfort in VR. A virtual body, on the other hand, has been shown to have a negative impact on motion sickness [17]. Serrano et al. [18] proposed motion parallax to help with MS, but this may not counteract the MS caused by frame delay due to its added computational cost.
The article by Chang et al. [4] provides a significant and influential contribution by presenting a thorough classification system that categorizes the origins of motion sickness induced by VR. The taxonomy identifies three primary domains that contribute to VR-induced motion sickness: hardware-related factors, content-related attributes, and individual-specific human variables. Hardware factors encompass various technical specifications, including display technology, Field of View (FoV) range, latency, and flicker rate. From a content standpoint, various complex factors, such as the dynamics of optical flow, the fidelity of graphical rendering, and the contextually optimized dimensions of the FoV, are identified as crucial precursors. The field of human factors encompasses demographic factors, such as age and gender, as well as experiential factors, with a particular focus on an individual’s inherent vulnerability to motion sickness.
Kemeny emphasizes the importance of reducing the discrepancy between visual and vestibular cues as a crucial approach to alleviate motion sickness [19]. The importance of controlling translational and rotational velocities, adopting HMDs with wide fields of view, and avoiding obstructions in the central visual field is supported by the empirical evidence presented in this study.
In their research work, Benz et al. [20] provide significant contributions to the field. Their argument suggests that the use of windscreen projection, which offers a wider field of view both horizontally (40 degrees) and vertically (35 degrees), leads to a reduced likelihood of causing simulator sickness when compared to traditional HMDs.
Modifications to the design of virtual reality content in order to address the issue of motion sickness were proposed by Patrao et al. [2]. The advised methodology promotes the conceptualization of VR experiences in a manner similar to theatrical productions. This involves incorporating static frame references and avoiding involuntary camera movements. These measures collectively contribute to the enhancement of user immersion and the reduction of discomfort.
The study by Cho et al. [21] presents a unique approach that utilizes a distortion technique to align user expectations with the motion experienced in a vehicle, resulting in a decrease in motion sickness.
Lim et al. [22] present an innovative approach that involves modulating the field of view in order to effectively alleviate symptoms of motion sickness. Nevertheless, the research articulates a word of warning regarding the possible consequences of imposing excessive limitations on the field of view, which could negatively impact the perception of presence and lead to a greater dispersion of attentional resources.
Jasper et al. [23] examine the disparities related to gender in terms of susceptibility to and recovery from motion sickness. The results highlight a higher vulnerability but faster recuperation in females compared to males.
Researchers such as Tovsic et al. [5] and Yoon et al. [24] investigate different aspects related to VR. The former examines the influence of latency on simulator sickness in smartphone VR, while the latter explores the effects of extended usage of VR through a smartphone-based HMD on visual parameters. Tovsic et al. find that there is no MS difference between latencies of 18 and 28 ms and Yoon et al. conclude that after 2 h of HMD usage, stereopsis and exophoric deviation significantly worsened.
Lim and Lee present findings that shed light on the intricate relationship between degrees of freedom (DOF) and FoV within the context of virtual reality-induced cybersickness [25]. This study compares the effects of using 3-DOF and 6-DOF HMDs and finds that the latter is associated with a lower likelihood of experiencing cybersickness. The decrease in discrepancy is ascribed to the increased alignment between the visual stimuli and the vestibular information, which is facilitated by the 6-DOF setups. The expansion of the field of view in HMDs is associated with a reduction in instances of cybersickness. This can be attributed to an increased feeling of immersion and a decrease in sensory conflicts.
A recent scholarly publication authored by Rolos and Merchant presents a methodical examination of the distinct vulnerability patterns observed in VR and MR HMDs [26]. It is postulated that the decrease in cybersickness induced by MR can be attributed to the translucent characteristics of MR HMDs, which allow for partial visibility of the real world and facilitate visual-vestibular interactions that enhance congruence.
The study conducted by Goedicke et al. [27] in 2022 presents an innovative approach to a mixed-reality driving simulation known as XR-OOM. The XR-OOM framework has the potential to reduce simulator sickness by integrating virtual elements into real-world driving scenarios using HMDs. This is achieved by ensuring that the visual stimuli presented in the virtual environment align with the vestibular cues experienced by the user.
In their research work, McGill et al. [28] explore the potential of AR/VR/MR technologies in improving passenger comfort during transportation. Through the synchronization of virtual augmentations with real-world surroundings, immersive technologies present a promising approach to mitigate motion-induced distress by harmonizing perceived and actual motion.

3. Study Overview

3.1. Method and Experiment Design

A selection of six different experimental setups was chosen based on parameters that could be changed in a physical VR HMD setup. For all experimental setups, the order of conditions or changes was assigned to subjects based on a random distribution. For setups where users tested only a subset of conditions, such as the FoV test, users were assigned FoV values based on a uniform distribution.
The experiments were divided into short and long tests. The shorter tests made up the majority of the testing, while the longer tests served to properly compare the simulation results against real-life results. The shorter SSQ evaluations were conducted at 0, 5, 10, and 15 min. The longer tests were conducted at 0, 5, 10, 20, 30, 45, and 60 min, or until a participant was no longer able to continue being in VR. All new test subjects were administered an MSQ test, as these scores were later used to normalize the results.
Some experiments changed the base nature of the application and may have introduced additional frame delays. Because of this, experiments that introduced computationally impactful changes in the simulation were evaluated against additional control tests.
All alterations to the user experience were conducted separately. The visual alterations were not combined in any way, so confounding variables were not tested within the frame of this study.

3.1.1. Adaptation

In order to confirm whether prior exposure to and training on a VR HMD are necessary, users' adaptability to VR had to be validated. The prior literature suggests that users become accustomed to VR over multiple exposure sessions, but the magnitude of this effect relative to other factors has remained unmeasured.
We set up an experiment in which we tested users three times on the same system: the second session took place after a break of 1 h and 15 min, and the third after a several-week-long break. These two different break intervals allowed us to compare the immediate and long-lasting effects of VR exposure.
A smaller sample set of users was also tested prior to the repeated tests, as that was their first exposure to VR. Their initial results are marked as "Pre test".

3.1.2. Field of View

Keshavarz et al. [29] showed that a reduced FoV of 32 degrees and a reduced visual display angle both contributed to a better motion sickness score. This has also been backed by Zielasko et al. [30] and Bala et al. [31], whose results indicated that a FoV reduction had a minor reducing effect on MS. All of the aforementioned sources tested either very limited FoV ranges or only a few selected FoV values.
We set up our experimental scenario with the same conditions, but with nine different FoV settings. The FoV settings tested were {26, 30, 34, 44, 49, 54, 60, 63, 84} degrees. All of the FoV tests were conducted by generating a flat plane in front of the user with a transparent hole in the center with blurred edges; different FoVs were achieved by scaling this plane. This ensured there were no additional distortions caused when the FoV was changed for each of the tests.
Out of the 9 possible FoV settings, 3–4 were chosen at random for each participant.
As the system comprises physical cameras, a virtual camera, and an HMD, it is important to distinguish between the three different types of FoV present: the FoV of the cameras that are projected into the system, the FoV of the virtual camera that is fed into the VR HMD, and the inherent FoV of the headset. A visual representation can be seen in Figure 1. The last link in this pipeline, the FoV of the headset, was the one being measured. Changing the FoV of the virtual camera has already been thoroughly researched [25,32]. In the case of the simulated testing setup, the first camera in the pipeline is a simulated camera. Apart from this difference, the two systems were designed to be as similar as possible.
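The scaling of the occluding plane described above follows from simple trigonometry. Assuming the plane sits at a fixed distance in front of the virtual camera (the distance value below is hypothetical), the aperture width needed for a target FoV can be computed as follows; this is a sketch of the geometry, not the authors' actual Unity implementation:

```python
import math

def aperture_width(target_fov_deg: float, plane_distance: float) -> float:
    """Width of the transparent hole in an occluding plane placed
    plane_distance units in front of the eye, such that the visible
    region subtends target_fov_deg horizontally."""
    return 2.0 * plane_distance * math.tan(math.radians(target_fov_deg) / 2.0)

# Aperture widths (in scene units) for the nine tested FoV settings,
# assuming a hypothetical plane distance of 1 unit.
widths = {fov: round(aperture_width(fov, 1.0), 3)
          for fov in (26, 30, 34, 44, 49, 54, 60, 63, 84)}
```

Because only the plane (and hence the aperture) is scaled, the projection of the camera feed behind it is untouched, which is why no additional distortion is introduced when the FoV changes between tests.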

3.1.3. Blur

The prior literature has suggested that blurring the salient area of a VR HMD view increases MS [33]. We wanted to verify the severity of this effect in comparison to other MS-inducing factors, as in-field VR equipment may be subjected to less-than-ideal conditions and the lenses may become smudged or blurred as a result.
In order to verify the apparent effect of general blurring on a VR HMD, we set up an experimental scenario where the VR camera had a blurring post-processing (PP) filter applied to it. This PP effect did not visibly impact performance, but as a precaution, a secondary control test was conducted where the PP effect was still applied but with a blur size of 0. Participants were randomly given one of the two effects to test and were then asked to test the second setting at a later date.

3.1.4. Vignette

Non-salient area blurring has been shown to reduce MS and test-track performance [34]. It could not be applied directly to a real camera feed in this application, as the added complexity would greatly increase glass-to-glass latency. As such, the effect was applied to the periphery of the VR view, producing what is known as a vignette effect.
An additional test was proposed and conducted after seeing the findings from the blur and FoV tests. The idea was to test blur as a means of reducing FoV artificially while maintaining some semblance of peripheral vision. For this, a new setup was created where the periphery of the VR view was blurred to match the best-performing reduced FoV values. The chosen FoVs were 26 deg and 54 deg, as they stood out in the raw FoV test data. As this new rendering pipeline introduced additional input delay, new control tests were conducted with black vignettes of the same size as the blurred periphery.
For both the black and blurred cases of each FoV setting, the same users were tested, and the first vignette method was chosen randomly for each user. Additionally, the users were asked to name their preferred vignette for each FoV setting separately.

3.1.5. Distortion

The real use-case system uses multiple camera inputs with varying optics to generate the VR display. These camera systems come with inherent but subtle image distortions. The effect of these distortions on a VR camera feed is largely unknown, and their influence needs to be measured.
The simulation was set up so that each of the cameras feeding into the VR view had a subtle artificial distortion applied to it. The distortions tested were barrel and pincushion distortions, as they are the most common and well-known types in image processing applications. The distortion effects were slightly exaggerated compared to the in-field setup: both distortions were applied at 6% strength in either direction. This was subtle enough to be barely perceivable without directly comparing the two cases, yet stronger than the distortion observed in the Vegvisir cameras.
The tests were conducted on the same people but they were randomly chosen to have one of the distortions applied to them first and the second one on a later date. As these distortions did not change the render pipeline and thus did not impact performance, completely separate control tests were not needed.
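The exact distortion model applied in the simulation is not specified in this text; a common single-parameter radial model can sketch how a roughly 6% barrel or pincushion warp acts on normalized image coordinates. Note that which sign of the parameter yields barrel versus pincushion depends on whether the mapping runs from undistorted to distorted coordinates or the reverse.

```python
def radial_distort(x: float, y: float, k: float):
    """One-parameter radial distortion on coordinates normalized so the
    image center is at (0, 0). The sign of k selects barrel vs.
    pincushion; its magnitude sets the strength (e.g. ~0.06 for a
    roughly 6% warp at unit radius)."""
    r2 = x * x + y * y          # squared distance from image center
    scale = 1.0 + k * r2        # radial scaling factor
    return x * scale, y * scale

# A point halfway to the image edge is displaced by k * r^2 = 1.5%.
x2, y2 = radial_distort(0.5, 0.0, 0.06)
```

The image center is a fixed point of the mapping, and the displacement grows quadratically with distance from it, which is why a 6% setting is barely perceivable near the middle of the view.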

3.1.6. Illumination

Comparing and stitching multiple camera feeds captured in outdoor conditions highlights issues with frame exposure and differing white balance settings. This was very apparent in field tests, and its significance for the user had to be verified. Kala et al. [35] have mentioned that decreased brightness in a camera feed reduces MS, but high variance in brightness between frames has not been deeply researched.
The simulation was set up to allow automatic white balance along with separate illumination settings for each camera. The illumination variations between the cameras were greatly exaggerated compared to in-field setups, as the real setup is highly dependent on outside lighting and weather conditions. A secondary control setup was also tested where only automatic white balance was enabled on the simulated cameras. Users were randomly given either the manually exaggerated exposure setting or the default automatic white balance setup, and were then tested on the other version at a later date.

3.2. In Field Setup

The test setup consisted of two separate systems: the Vegvisir system, which used real pass-through live-feed cameras, and a simulation setup deployed on an HTC Vive with simulated camera feeds. A flowchart of how the system operates can be seen in Figure 2. The former was only tested in the field and is referenced in this text as the "in-field" setup. The latter was a stationary setup and is referenced as the "in-simulation" setup.
In-field tests were conducted on different race tracks, but the test setup and evaluation procedure were the same throughout. The majority of in-field tests were conducted as 15 min runs, as the longer tests were primarily used to confirm that the simulation and in-field systems behave similarly in terms of MS. A more thorough description of the test lengths can be found in the results chapter. A rendering of what was displayed to the user can be seen in Figure 3.
For the majority of recorded tests, the test subject was seated in the passenger seat facing the direction of travel. In four cases, the test subject was instead in the driver's seat; their recorded results were indistinguishable from the other test results.

3.3. VR Simulation Setup

As a proof of concept for the system, a VR simulation in Unity had been developed beforehand. This simulation proved to be ideal for testing different MS-related effects in a controlled environment.
The simulated setup was based around a VR scene of a small town and surrounding fields. The materials of the assets, the scene lighting, and the skybox were adjusted to appear more natural. Most of the similarity was achieved using post-processing effects such as depth of field, color balance, bloom, and film grain on each of the virtual cameras to replicate the camera feed of the real-life counterpart. Post-processing was also used separately on the VR headset camera to replicate the conditions of the Vegvisir system, but these adjustments were minor and primarily focused on bloom and the FoV of the headset.
The VR HMD for the simulation was the HTC Vive Pro Eye instead of the Vegvisir HMD used in the real-life setup. This choice was made because the Vive had a larger viewing angle, allowing tests to exceed the technical specifications of the original Vegvisir system. The simulation was refined based on field-test data to achieve visual clarity as close as possible to that of the final system, and the view from the Vive headset was additionally adjusted with a blocking vignette to fully mimic the viewing angle of the Vegvisir HMD. A sample of the focal depth for the simulated system as well as the real-life counterpart can be seen in Figure 4.
During the tests, the subjects remained seated and they were allowed to move the vehicle using keyboard inputs.

Hardware

For the simulation setup, an HTC Vive Pro was used with a computer comprising an Nvidia RTX 3060 GPU and an AMD Ryzen 5 5600X CPU. This allowed the simulated setup to closely mimic the real-life setup in terms of visual fidelity and motion-to-photon latency. The reported viewing angle for the Vive headset is 107 deg horizontally, and the manually recorded FoV was 81 deg.
The Vive HMD and the Vegvisir HMD were both manually measured to have very similar motion-to-photon latency under default conditions. With the blurred vignette setup, the Vive was measured to have a 60 ms higher latency, so all comparison tests with the black vignette were redone on the slower setting.

4. Participants

The simulation-side experiment involved 25 individuals. Due to the substantial length of each test and repeated testing on the same individuals to obtain comparable results, it was decided that the small sample size was acceptable. The sample size was also limited due to the specialized nature of tests, which are both time-intensive and physically demanding for participants. Considering the potential discomfort and the need for close monitoring of each participant’s well-being, limiting the number of individuals involved was important. The subjects were thirteen females and twelve males—all students and employees of the University of Tartu. The ages of the participants ranged from seventeen to twenty-eight, with no significant prior VR experience. This particular study group was selected because individuals within this age category are the most frequent users of VR. Participants tried different VR parameters and had different numbers of test sessions.
The in-field tests were conducted on 26 test subjects, of which 10 were university students. The participants consisted of three females and twelve males, and the overall distribution was nine females and seventeen males including the university students. The ages of the in-field subjects ranged from 19 to over 60 years. Most of the longer in-field tests were conducted on the overlapping simulation participants, as this made it easier to compare the results of the two systems.
Among the participants, 16 took part in only a single test; 11 of these came from the field tests. The remaining participants took part in 5.5 tests on average. All participants were informed ahead of time about the nature of the experiment and the potential discomfort caused by VR-induced motion sickness, and they were allowed to stop a test at any time if they experienced discomfort.

5. Results

5.1. SSQ Normalization Using MSQ

Motion sickness is a variable and often perplexing phenomenon in VR experiences. The individual susceptibility to VR-induced motion sickness can greatly differ among users. The literature has pointed toward a potential solution—a weak correlation between pre-test MSQ scores and subsequent SSQ scores [17]. This revelation opens the possibility of using MSQ scores to normalize SSQ values and establish a more uniform measure of motion sickness severity.
To delve into this correlation, our study initially compiled a wide spectrum of SSQ data, reflecting the diverse reactions people exhibit. Employing linear regression on each participant's SSQ scores, we quantified the spread of the generated lines to derive a baseline score. To bridge the MSQ–SSQ relationship, a series of scaling functions F(x, m, s) was introduced. Here, x is the raw SSQ input to be scaled, m is the corresponding MSQ score for that specific participant, and s is a parameter introduced to optimize the function. Manipulating the s variable and the underlying function F(x, m, s) led to the identification of the best fit.
The most effective function, derived from this analysis of the raw data, can be expressed as F(x, m, s) = x / (1 + m·s).
The best s value on all of the data was found to be 1.50, but this varied slightly as newer tests were recorded. Sample best fitting lines can be seen in Figure 5. The first plot depicts the error of the application of the function F ( x , m , s ) on all recorded data over varying values of s. The x-axis depicts varying values of s and the y-axis depicts the standard deviation for the data at each time step after scaling. The four lines on the plot show where the smallest standard deviation was present for 5, 10, 15, and 60 min after scaling.
This was a form of focused fitting where the standard deviation of values near the X amount of minutes was used as the fitting error with a varying MSQ coefficient. Each of the lines represents all user data scaled by different MSQ coefficient values (x-axis) and then linearly interpolated to obtain the value at X minutes; all of these values were aggregated and the standard deviation was used as the fit error (y-axis). The values 5, 10, and 15 were chosen as they were the most frequent SSQ sample point values in the dataset and thus the most reliable. The 60 min represents our main target duration but is mostly interpolated.
The function F(x, m, s) was selected in a similar manner. Several functions, including linear, polynomial, exponential, and combinations of these, were tested. The proposed function F(x, m, s) = x / (1 + m·s) outperformed the rest, as more complex functions tended to over-fit on a specific time range very quickly and changed drastically when new test results were added for evaluation.
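The normalization function and the focused-fitting search for s can be expressed compactly. The function F is the paper's own formula; the grid search over candidate s values that minimizes the standard deviation of the scaled scores is a sketch of the described fitting procedure, and the sample data below are hypothetical.

```python
import statistics

def scale_ssq(ssq: float, msq: float, s: float) -> float:
    """The paper's normalization F(x, m, s) = x / (1 + m * s)."""
    return ssq / (1.0 + msq * s)

def best_s(samples, candidates):
    """samples: (ssq_at_time_T, msq) pairs, with SSQ values interpolated
    to a common time point T. Returns the candidate s that minimizes the
    standard deviation of the scaled scores (the fitting error)."""
    def spread(s):
        return statistics.pstdev(scale_ssq(x, m, s) for x, m in samples)
    return min(candidates, key=spread)

# Hypothetical data: higher-MSQ subjects reporting higher raw SSQ,
# so a nonzero s brings the scaled scores closer together.
samples = [(10.0, 0.0), (20.0, 2.0), (30.0, 4.0)]
s_star = best_s(samples, [0.0, 2.5, 5.0])
```

In the study this search was repeated per target duration (5, 10, 15, and 60 min), which is why the best s drifted as new tests were recorded.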
In the context of long-term tests, a distinct pattern emerged. Figure 6 only contains recorded data that exceeded 15 min in test time as this subset proved to be more stable for SSQ curve prediction.
Fluctuations in the best fit suggested an insufficiency of data, hinting at the challenges of conducting longer-duration tests. Here, the coefficient value ranged from 3.45 to 5.40, with coefficients exceeding 1.00 yielding more stable MSQ scaling results compared to those based on data from all experiment durations. This phenomenon can be attributed to the dominance of 15 min samples, which seemingly lacked sufficient time to allow motion sickness effects to stabilize.
The best s value on the long-duration data was 4.95. Both the initial 1.50 and 4.95 were tested for data normalization, and 4.95 was found to yield a smaller standard deviation on the unaltered data set. As such, the coefficient of 4.95 was used for MSQ scaling, but this remains to be fully explored.
Exponential and logarithmic functions were also explored but a linear function yielded the most consistent results when the data set was updated. This inconsistency of the logarithmic function was due to the sporadic nature of the SSQ results, causing the function to often fail completely and default to a 0 slope line when there were not enough user samples.
This scaling mechanism was subsequently applied to all SSQ evaluations, thereby creating a theoretically more comparable set of raw values. However, the practical application of MSQ-scaled results proved to be nuanced. While this approach successfully compensated for some outliers, it encountered limitations in cases where subjects either overestimated their susceptibility to motion sickness or lacked prior experience. Nevertheless, the application of MSQ scaling has been included in later analysis as it better addresses outliers who have either very low or very high inherent susceptibility to MS.
To account for inter-individual MSQ variations, a secondary scaling based on the median MSQ was introduced. Using the observed scaling coefficient of 4.95 and targeting a median MSQ value of 5.0 , this approach aimed to highlight users who, despite their anticipated low susceptibility, displayed unexpectedly high SSQ scores. Additionally, it aimed to align users with inherently high MSQ scores, expecting them to exhibit correspondingly elevated SSQ scores.
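One possible reading of this median-based adjustment is to rescale all MSQ values so that their median lands on the target of 5.0 before the primary scaling is applied; both the recentering rule and the sample values below are our assumptions, as the exact procedure is not given.

```python
import numpy as np

def recenter_msq(msq_values, target_median=5.0):
    """Secondary scaling: rescale every user's MSQ so the group
    median lands on the target value (our interpretation of the
    median-based adjustment described in the text)."""
    msq = np.asarray(msq_values, dtype=float)
    return msq * (target_median / np.median(msq))

# Hypothetical user MSQs; the group median is 4.0 before adjustment.
msq = [1.0, 2.0, 4.0, 8.0, 20.0]
adjusted = recenter_msq(msq)
print(np.median(adjusted))
```

After recentering, a user whose adjusted MSQ is well below 5.0 yet who posts high SSQ scores stands out, which is the comparison the text aims to enable.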

5.2. SSQ Change over Time

Understanding the dynamics of motion sickness and its thresholds is critical for ensuring a comfortable and immersive virtual reality experience. In this section, we delve into the results of long-term tests, focusing on the change of the SSQ scores over time and the establishment of motion sickness thresholds.
The outcomes of extensive long-term tests provide us with valuable insights into the trajectory of SSQ changes over extended durations and the convergence of these changes. By analyzing the baseline curve of SSQ change over time, we gain an overall perspective of how SSQ scores tend to increase over prolonged periods until they level off. This baseline curve, in conjunction with the level-off period and its corresponding value, serves as a means to assess the disparity between simulation and field tests. The adaptation of simulation test results onto field tests is facilitated through this comparative approach.
Across the spectrum of tests, encompassing both field tests and simulated experiments, a notable motion sickness threshold emerges. At an SSQ score of approximately 75–80, users exhibit signs of motion sickness-related effects, and beyond this threshold, impairing symptoms become prominent. The upper limit that users can tolerate before discontinuation lies in the range of 100–120 SSQ points. The lower bounds of these lines are visualized in Figure 7.
Figure 7 visualizes these thresholds using user data recorded for durations exceeding 15 min. Users who crossed the green dotted threshold line at an SSQ score of 75 reported symptoms causing impairment, and crossing the more critical red dotted line at an SSQ score of 100 left users unable to continue. The data capture general trends, though two anomalous edge cases were excluded due to their rapid and severe SSQ score increase. These cases were attributed to faulty tests: in both, the test setup experienced a frame-rate drop unrelated to the experiment, causing a rapid rise in SSQ scores past 100 and resulting impairment.
When searching for the best fitting function for the given dataset, linear regression was applied first. This fit the dataset quite well while the experiments lasted up to 15 min. The root-mean-square error (RMSE) for linear regression on all of the data was 27.4, while for the MSQ-scaled results the RMSE was 24.5.
Once longer tests were conducted, it became apparent that the SSQ results tended to level off at around 30 to 40 min. The RMSE for a logarithmic curve on all of the data was 23.9, and for the MSQ-scaled results the RMSE was 20.1. A polynomial and an exponential fit were also tested during this time. While the exponential fit had a slightly worse RMSE (24.1 and 20.2), the polynomial fit had a better RMSE (23.5 and 19.8) than the logarithmic function. Manually reviewing the fit, it was apparent that the low overall scores of the longest tests greatly impacted the polynomial fit, causing it to slope downwards after the 45 min mark. As this sort of reduction in MS was not observed in individual test subjects, only in the data as a whole, the downward slope was attributed to the small number of >45 min tests and the polynomial fit was discarded.
For fitting, 61 separate test results were used, which translated into 244 data points with a mean of 24.8 min and a standard deviation of 18.3 min.
Similar RMSE and MSQ-scaling patterns were found when fitting only the experiments that lasted longer than 15 min. This >15 min subset contained 27 tests with 125 points, a mean of 38.4 min, and a standard deviation of 19.5 min.
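The logarithmic leveling-off fit described above can be reproduced with ordinary least squares on log-time; the sample points below are hypothetical and merely mimic the rise-then-plateau pattern, not the study's data.

```python
import numpy as np

# Hypothetical (time in minutes, SSQ) samples showing a level-off pattern.
t = np.array([5, 10, 15, 20, 30, 40, 50], dtype=float)
ssq = np.array([15, 28, 36, 42, 48, 52, 53], dtype=float)

# Fit ssq ~ a * log(t) + b by least squares on log-transformed time.
a, b = np.polyfit(np.log(t), ssq, deg=1)
pred = a * np.log(t) + b

# Root-mean-square error of the fit.
rmse = np.sqrt(np.mean((ssq - pred) ** 2))
print(a, b, rmse)
```

A positive slope `a` with a shrinking marginal increase per minute is exactly the leveling-off behavior the logarithmic model captures, which linear regression cannot.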

5.3. Comparison of Field Test and Simulation Test Results

Field tests, conducted on race tracks and off-road tracks, mirrored the simulation’s predicted curve for the most part. These prediction curves can be observed in Figure 8 as well as the averaged user scores in Figure 9.
In total, 26 control tests were conducted on the unmodified simulated setup. The average duration of the tests was 25.68 min with a standard deviation of 13.03 min. Additionally, 26 in-field tests were conducted on the Vegvisir system. The average duration of the tests was 31.52 min with a standard deviation of 24.5 min. The unmodified simulation setup test results were used as the control value for most of the experimental setups. The blurred vignette did not use these data as the control baseline.
Visually inspecting the results, it is apparent that the raw field tests induce marginally more nausea than the simulated tests. When accounting for MSQ scores, the simulated and field tests were much more similar. The numeric results indicate that the simulator appears to perform poorly in the 20 to 30 min range but levels out at around 40 min. This was backed by post-experiment interviews: users got used to the system, and some of their SSQ symptoms were only temporarily heightened.
In the case of the in-field tests, the SSQ values did not follow a similar spiked pattern at 20 min but tended to be more gradual, especially when normalized by MSQ. These findings can be contextualized against the reference line, which represents aggregated data from all unaltered tests. As an additional reference, the results from a similar experiment by Benz et al. [20], where the SSQ at 50 min was measured, have also been added in red.
Participant feedback from the field tests provides further context. Participants universally reported an improved experience and greater usability of the system after 10–20 min.
All test subjects preferred the field test version to the simulated version in terms of MS. Verbal responses indicated that the heightened realism and detailed view from the camera were the prominent factors behind this preference. This was also visible when comparing the MS values of the subjects with the highest MSQ values: they did not feel they had crossed the previously noted MS thresholds, even when their SSQ scores indicated otherwise. This could be attributed to the subjectivity of the SSQ tests.
However, it is crucial to acknowledge certain limitations. The feedback indicates concerns related to eye strain and the weight of the VR system on the face. These factors were less prominent in the simulated setup.
The field test setup HMD has a wide focus adjustment range, and improper adjustment might have been the primary cause of the eye strain, as most subjects opted not to adjust the focus. A second factor that may have contributed to eye strain is improper fixation of the headset: if the headset is not properly fixed to the head, the movement and vibrations of the vehicle can shift the HMD and cause the user to lose focus. These were known factors but hard to account for, as knowing how to best adjust the headset requires more experience on the user's part.
This section offers a comprehensive exploration of SSQ changes over time and the establishment of motion sickness thresholds in VR environments. The findings underscore the significance of adapting tests to real-world conditions for a more accurate assessment of motion sickness responses. Further investigation into anomalies, user feedback, and system ergonomics remains imperative for enhancing the VR experience and ensuring user comfort.
To facilitate better statistical comparisons between the real-world and simulated data, a series of analytical techniques were applied. ANCOVA was employed to ascertain whether a significant difference exists between the field and simulation data sets, and T-tests were conducted to compare individual data slopes, thereby determining the similarity between the two sets of slopes. Both tests were used to quantitatively assess whether the two datasets could be labeled similar beyond their visual resemblance.
The ANCOVA analysis yields a p-value of 0.000112; as this is well below the 0.05 threshold, the datasets are significantly different. This divergence may be attributed to a higher percentage of people with high MSQ scores in the simulation dataset.
The T-test on the slopes yielded a coefficient of 0.39 with a p-value of 0.698. This difference in slopes is likely due to chance and is thus statistically insignificant.
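The two tests can be sketched as follows. The ANCOVA is implemented here as a manual F-test of a group term over a time covariate, the slope comparison uses an independent-samples t-test, and all data points are hypothetical; the paper does not specify its exact implementation.

```python
import numpy as np
from scipy import stats

def ancova_group_effect(t, y, g):
    """One-covariate ANCOVA: test a group effect on y with time t as
    covariate, via an F-test of the full model (intercept, t, group)
    against the reduced model (intercept, t)."""
    X_full = np.column_stack([np.ones_like(t), t, g])
    X_red = X_full[:, :2]
    def sse(X):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        return float(np.sum((y - X @ beta) ** 2))
    sse_f, sse_r = sse(X_full), sse(X_red)
    dof = len(y) - X_full.shape[1]
    f = (sse_r - sse_f) / (sse_f / dof)
    return stats.f.sf(f, 1, dof)  # p-value of the group term

# Hypothetical samples: field tests rise faster than simulated tests.
t = np.tile(np.array([5.0, 10.0, 15.0, 20.0]), 2)
y = np.array([10.0, 22.0, 31.0, 40.0,   # field
              8.0, 15.0, 20.0, 26.0])   # simulation
g = np.repeat([1.0, 0.0], 4)
p_group = ancova_group_effect(t, y, g)

# Per-test slopes compared with an independent-samples t-test,
# as in the slope comparison described in the text.
def slope(tt, yy):
    return np.polyfit(tt, yy, 1)[0]

field_slopes = [slope([5, 10, 15], [9, 20, 30]),
                slope([5, 10, 15], [11, 24, 33])]
sim_slopes = [slope([5, 10, 15], [7, 13, 19]),
              slope([5, 10, 15], [8, 16, 21])]
t_stat, p_slope = stats.ttest_ind(field_slopes, sim_slopes)
print(p_group, t_stat)
```

With these toy numbers the group term is significant (the field curve sits clearly above the simulated one) while the slope t-test statistic merely indicates which set of slopes is steeper.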

5.4. Comparison of Long Field and Simulation Test Results

In the pursuit of a comprehensive evaluation of VR motion sickness, it becomes essential to analyze the results of both field tests and unaltered simulation tests. To counterbalance potential biases introduced by an abundance of short-duration tests, a secondary comparison was conducted between field and simulation tests exclusively comprising samples surpassing the 15 min threshold. Both curves can be observed in Figure 10 and in Figure 11, where the scores are averaged.
When only comparing the longer experiments, the results become more stable. Unfortunately, with the MSQ-scaled results, the logarithmic prediction for simulated results failed to find a good enough approximation. To compensate for this, a simple linear regression line was plotted instead.
Verbal responses by participants mentioned how the in-field setup caused less apparent nausea and was more enjoyable than the simulated setup. The numeric SSQ results for the long field tests were influenced by hot weather during the tests, causing higher reported sweating levels for the SSQ tests than on any previous experiment.
The simulated and in-field results were also compared using ANCOVA to detect a difference between the two data sets and a T-test to compare the similarity of their slopes.
The ANCOVA analysis indicates a non-significant difference between the two data sets, with a p-value of 0.1127.
The T-test coefficient, registering at −2.13, denotes a pronounced difference in slopes in the reverse direction, with a p-value of 0.06. Given the high p-values in all of these tests, it is unfortunately impossible to state that the field and simulated data sets are statistically similar to one another. As such, the results obtained from the altered setup experiments cannot be directly compared to the field test results.

5.5. Adaptation

The results of our repeated tests confirm the common understanding that people become accustomed to VR over multiple exposure sessions. A total of 26 tests were conducted in the repeat experiment. Each of the three test cases had eight users evaluated, with two extra accounting for the pre-tests. All of these tests were exactly 15 min long.
As seen in Figure 12, the users tested were visibly more comfortable during the second round, and their motion sickness scores were reduced. The test also includes the results of two strongly susceptible people who had a single test conducted two weeks prior to the first test. In their case, the difference between the pre-test and the later tests was much more significant. The impact of continuous exposure is especially visible when comparing the overall SSQ mean of the test, as seen in Table 1.
This suggests that prior training with a VR device improves how long a user can withstand using it before becoming uncomfortable or impaired by motion sickness-related effects, and that the effects of such exposure linger for an extended period. Based on these findings, first-time VR users will experience significantly worse motion sickness than repeat users, making them a poor benchmark for how MS-inducing a VR device is.
During our repeated tests, we also noticed short-lasting after-effects on users, as mentioned by Bles et al. [36]. This applied only to people with high MSQ scores during the second round of tests, as they had not managed to fully recover before the second experiment. Based on the second round of adaptation tests and past testing experience, it takes users between 30 min and 1 h to fully recover from any effects caused by VR, depending mostly on the individual user's MS susceptibility. The third experiment was conducted with a 1-week- to 10-day-long wait period and is thus not influenced by after-effects.

5.6. Field of View

In total, 49 tests were conducted using varying FoV settings. As the subjects were randomly assigned their test FoVs, the number of samples in each category was not equal. All of these tests were exactly 15 min long.
The FoV experiment scores were scaled by MSQ and can be seen in Figure 13. The SSQ scores were also scaled per user to remove the effect of random chance assigning several highly susceptible users to the same FoV setting.
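A minimal sketch of such per-user scaling, assuming it normalizes each user's scores by that user's own mean (the exact rule is not specified in the text, so this is our interpretation):

```python
import numpy as np

def scale_by_user(scores_by_user):
    """Normalize each user's SSQ scores by that user's own mean, so
    that inherently susceptible users do not dominate whichever FoV
    condition they happened to be assigned to (our interpretation of
    the per-user scaling described in the text)."""
    out = {}
    for user, scores in scores_by_user.items():
        scores = np.asarray(scores, dtype=float)
        out[user] = scores / scores.mean()
    return out

# Hypothetical scores for two users across two FoV conditions: one
# mildly and one strongly susceptible user with the same relative trend.
scaled = scale_by_user({"u1": [20.0, 40.0], "u2": [60.0, 120.0]})
print(scaled["u1"], scaled["u2"])
```

After scaling, both users contribute the same relative pattern, so a condition's score reflects the condition rather than who was assigned to it.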
With the data scaled by users, the results suggest a minor local minimum of motion sickness at 49–54 deg and two very pronounced local minima at 26 deg and 84 deg. The very low FoV performing well is explained by highly susceptible users, who tended to prefer lower FoV settings in general. This coincides with the findings of Teixeira et al. [37], where FoV reduction had a very significant impact on MS reduction for susceptible people. The pronounced minimum at 84 deg is currently unexplained and may simply reflect a lack of data. The primary local minimum lying between 49 and 54 deg is also consistent with the findings of Kala et al. [35], where lower FoV values down to 60 deg were suggested and values below 45 deg proved to be hampering.
The raw SSQ values, shown in Table 2, indicate very similar results but with a larger preference for 49 and 84 deg FoV values than the MSQ-scaled or user-scaled versions.

5.7. Blur

For the blur test, 19 tests were conducted: 10 with the actual blurred setup and 9 as separate control tests. All of the tests lasted 15 min.
The raw SSQ results in Figure 14 indicate no distinguishable impact of blur on MS. Scaling the results by MSQ, however, shows that the blurring effect impacted MS negatively. This was backed by user feedback, and the effect was even more pronounced for people with already high MSQ scores.
This indicates that even slight blurring in the middle of the user's view has a minor negative impact on motion sickness.
Note that this test only evaluated blur in the center of vision, not in the periphery, which has been shown to produce different results. This means the HMD screen has to be properly calibrated for each user, as any blur will impair the user's ability to use the system for extended periods. It also means the lenses of the headset must be kept clean, as smudges and fingerprints are likely in an in-field VR application.

5.8. Vignette

For the vignette test, 16 tests were conducted. All of these tests lasted 15 min.
The SSQ scores in Figure 15 indicate that the 54 deg black vignette performed the best, with the other black vignette and the 26 deg blurred periphery following close behind. Verbally, all participants who tested both setups also indicated their preference for the black vignette over the blurred version.

5.9. Distortion

With the distortion setup, 14 tests were conducted. All of these tests were exactly 15 min long.
Based on the conducted tests shown in Figure 16, the pincushion distortion had a greater effect than the barrel distortion. Neither of the tested distortions had a very noticeable effect on the test subjects when they were questioned after the tests. This suggests that any distortions present do not have to be corrected very precisely, which saves computational resources.

5.10. Illumination

The illumination test was conducted eight times, with four additional control tests using overlapping subjects. The new control tests were conducted to check whether a small change in the lighting system had a noticeable impact on our setup, but no differences were observed. All of the illumination tests were exactly 15 min long, and the control tests had a mean length of 18.75 min with a standard deviation of 2.5 min.
From the tests we conducted, there appears to be only a very minor difference between users exposed to cameras with varying illumination settings and those viewing uniformly illuminated cameras. The recorded SSQ scores indicate that illumination variance may have a subtle effect on MS compared to the control setting. Verbal responses either indicated no visible difference or affirmed the findings of Kala et al. [35] that increased illumination and overexposure heighten the effects of MS. The findings of the illumination experiment can be seen in Figure 17 and in Table 3.
In practice, users tend to focus on a single camera view and rarely view the overlapping area between two cameras. This, along with users preferring a zoomed-in view of the sphere, tends to reduce the impact of variance between the different camera feeds. The same user preference was observed in the camera distortion tests, where the main impact was near the area where two camera feeds were stitched together.

6. Additional Findings

Sakai et al. [38] note in their findings that a user's sense of speed is greatly influenced by the FoV of the headset. There are also contradictory findings by Perrin et al. [39], where users were able to estimate their speed in VR very accurately.
In our case, users participating in the field tests did encounter the influence of FoV when estimating the speed of the vehicle: a larger FoV caused users to underestimate the actual speed, while a lower FoV caused them to overestimate it. The difference was minor compared to Sakai et al., but this was likely because other cues, such as engine sounds, familiarity with the vehicle, and car vibrations, helped the users better estimate their speed.

7. Conclusions

This research was aimed at improving the user experience of a mixed-reality situational awareness system. The results of this study may be applicable to VR applications that display a live pass-through camera feed to the user, but, as this research was primarily conducted on a particular system, they cannot be generalized to general-purpose VR systems. Additionally, it should be noted that the simulation experiments were primarily conducted with participants aged 17–28; the field test results and their comparisons were conducted on a larger age range.
To quantify the effects of the different changes made to the system, the Simulator Sickness Questionnaire was used as the primary metric, with user feedback confirming the results. As the SSQ is a subjective metric that is highly dependent on a user's inherent motion sickness susceptibility, the Motion Sickness Susceptibility Questionnaire was used to normalize the results.
Our findings indicate that MSQ values can be used on SSQ recordings in order to normalize the results across users with varying inherent motion sickness susceptibility.
We have also found that a user’s motion sickness level increases rapidly in the first 10–20 min of contact with VR and levels out at around 40–50 min. This pattern was best described and approximated with a logarithmic curve.
We observed two motion sickness thresholds. Crossing the first was observed to be detrimental to usability and the second one rendered the user unable to continue. In terms of SSQ scores, these values were found to be at around 75–80 and 100–120, respectively. The specific threshold value varied by user due to the subjective nature of SSQ.
Repeated exposure proved to have a positive impact on motion sickness, even when the time between different VR sessions was less than the general recovery time.
Our tests concluded that a preferable VR setup for an MRSAS application has a FoV between 49 and 54 deg with a black periphery. Neither blurring on the screen nor blurring of the periphery had a positive effect on MS.
Camera-based effects on performance caused by lens distortion or automatic white balance were shown to be negligible, as the primary impact of the effects was concentrated on the overlapping or stitched edges of different camera feeds.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app14062231/s1.

Author Contributions

Conceptualization, G.A. and R.S.; methodology, R.E.H.; software, R.E.H., V.P., T.L. and A.P.; validation, R.E.H., A.P. and R.S.; formal analysis, N.M. and V.P.; investigation, R.E.H., N.M. and V.P.; resources, R.S., T.L. and A.P.; data curation, R.E.H.; writing—original draft preparation, R.E.H. and G.A.; writing—review and editing, R.E.H.; visualization, R.E.H.; supervision, G.A., R.E.H. and R.S.; project administration, G.A.; funding acquisition, G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by and in cooperation with Vegvisir (Defensphere OÜ).

Institutional Review Board Statement

Ethical review and approval were waived for this study, as this article does not contain medical research, no personal or identifiable medical data were gathered, and all subjects consented to the experiments prior to putting on the headsets.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest. The research was conducted in order to improve a specific VR system and the results may be beneficial for the development of other similar systems.

Abbreviations

SSQ    Simulator Sickness Questionnaire
MSQ    Motion Sickness Susceptibility Questionnaire
HMD    Head-Mounted Display
MS     Motion Sickness
VR     Virtual Reality
FoV    Field of View

References

1. Hwang, A.D.; Peli, E. Instability of the perceived world while watching 3D stereoscopic imagery: A likely source of motion sickness symptoms. i-Perception 2014, 5, 515–535.
2. Patrão, B.; Pedro, S.; Menezes, P. How to Deal with Motion Sickness in Virtual Reality. In Proceedings of the 22o Encontro Português de Computação Gráfica e Interação 2015; Dias, P., Menezes, P., Eds.; The Eurographics Association: Eindhoven, The Netherlands, 2020; ISBN 978-3-03868-128-1.
3. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 1993, 3, 203–220.
4. Chang, E.; Kim, H.T.; Yoo, B. Virtual reality sickness: A review of causes and measurements. Int. J. Hum. Comput. Interact. 2020, 36, 1658–1682.
5. Tošić, I.; Hoffman, D.; Balram, N. Effect of latency on simulator sickness in smartphone virtual reality. J. Soc. Inf. Disp. 2021, 29, 561–572.
6. Keshavarz, B.; Hecht, H. Validating an efficient method to quantify motion sickness. Hum. Factors 2011, 53, 415–426.
7. Frank, L.; Kennedy, R.S.; Kellogg, R.S.; McCauley, M.E. Simulator sickness: A reaction to a transformed perceptual world. 1. Scope of the problem. In Proceedings of the Second Symposium of Aviation Psychology; Ohio State University: Columbus, OH, USA, 1983; pp. 25–28.
8. Kim, S.K.; Kang, S.J.; Choi, Y.J.; Choi, M.H.; Hong, M. Augmented-Reality Survey: From Concept to Application. KSII Trans. Internet Inf. Syst. 2017, 11, 982–1004.
9. Onsel, I.; Donati, D.; Stead, D.; Chang, O. Applications of virtual and mixed reality in rock engineering. In Proceedings of the ARMA US Rock Mechanics/Geomechanics Symposium, Seattle, WA, USA, 17–20 June 2018; p. ARMA-2018-798.
10. Hu, H.Z.; Feng, X.B.; Shao, Z.W.; Xie, M.; Xu, S.; Wu, X.H.; Ye, Z.W. Application and prospect of mixed reality technology in medical field. Curr. Med. Sci. 2019, 39, 1–6.
11. De Armas, C.; Tori, R.; Netto, A.V. Use of virtual reality simulators for training programs in the areas of security and defense: A systematic review. Multimed. Tools Appl. 2020, 79, 3495–3515.
12. Coelho, L.P.; Freitas, I.; Kaminska, D.U.; Queirós, R.; Laska-Lesniewicz, A.; Zwolinski, G.; Raposo, R.; Vairinhos, M.; Pereira, E.T.; Haamer, E.; et al. Virtual and augmented reality awareness tools for universal design: Towards active preventive healthcare. In Emerging Advancements for Virtual and Augmented Reality in Healthcare; IGI Global: Hershey, PA, USA, 2022; pp. 11–24.
13. Arena, F.; Collotta, M.; Pau, G.; Termine, F. An overview of augmented reality. Computers 2022, 11, 28.
14. Kamińska, D.; Zwoliński, G.; Laska-Leśniewicz, A.; Raposo, R.; Vairinhos, M.; Pereira, E.; Urem, F.; Hinic, M.L.; Haamer, R.E.; Anbarjafari, G. Augmented Reality: Current and New Trends in Education. Electronics 2023, 12, 3531.
15. Zwoliński, G.; Kamińska, D.; Haamer, R.E.; Coelho, L.F.; Anbarjafari, G. Enhancing empathy through virtual reality: Developing a universal design training application for students. Med. Pr. 2023, 74, 199–210.
16. Li, J.; Fan, M.; Wang, G.; Li, X.; Sun, R. Panorama video stitching system based on VR Works 360 video. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi'an, China, 30 November–2 December 2018; pp. 715–720.
17. Allen, R.C. The Effect of Restricted Field of View on Locomotion Tasks, Head Movements, and Motion Sickness; University of Central Florida: Orlando, FL, USA, 2000.
18. Serrano, A.; Kim, I.; Chen, Z.; DiVerdi, S.; Gutierrez, D.; Hertzmann, A.; Masia, B. Motion parallax for 360 RGBD video. IEEE Trans. Vis. Comput. Graph. 2019, 25, 1817–1827.
19. Kemeny, A. From driving simulation to virtual reality. In Proceedings of the 2014 Virtual Reality International Conference, Laval, France, 9–11 April 2014; pp. 1–5.
20. Benz, T.M.; Riedl, B.; Chuang, L.L. Projection displays induce less simulator sickness than head-mounted displays in a real vehicle driving simulator. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21–25 September 2019; pp. 379–387.
21. Cho, H.J.; Kim, G.J. RoadVR: Mitigating the effect of vection and sickness by distortion of pathways for in-car virtual reality. In Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology, Ottawa, ON, Canada, 1–4 November 2020; pp. 1–3.
22. Lim, K.; Lee, J.; Won, K.; Kala, N.; Lee, T. A novel method for VR sickness reduction based on dynamic field of view processing. Virtual Real. 2021, 25, 331–340.
23. Jasper, A.; Cone, N.; Meusel, C.; Curtis, M.; Dorneich, M.C.; Gilbert, S.B. Visually induced motion sickness susceptibility and recovery based on four mitigation techniques. Front. Virtual Real. 2020, 1, 582108.
24. Yoon, H.J.; Moon, H.S.; Sung, M.S.; Park, S.W.; Heo, H. Effects of prolonged use of virtual reality smartphone-based head-mounted display on visual parameters: A randomised controlled trial. Sci. Rep. 2021, 11, 15382.
25. Lim, C.H.; Lee, S.C. The Effects of Degrees of Freedom and Field of View on Motion Sickness in a Virtual Reality Context. Int. J. Hum.-Comput. Interact. 2023, 1–13.
26. Kirollos, R.; Merchant, W. Comparing cybersickness in virtual reality and mixed reality head-mounted displays. Front. Virtual Real. 2023, 4, 1130864.
27. Goedicke, D.; Bremers, A.W.; Lee, S.; Bu, F.; Yasuda, H.; Ju, W. XR-OOM: MiXed Reality driving simulation with real cars for research and design. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; pp. 1–13.
28. McGill, M.; Li, G.; Ng, A.; Bajorunaite, L.; Williamson, J.; Pollick, F.; Brewster, S. Augmented, Virtual and Mixed Reality Passenger Experiences. In User Experience Design in the Era of Automated Driving; Springer International Publishing AG: Cham, Switzerland, 2022; pp. 445–475.
29. Keshavarz, B.; Hecht, H.; Zschutschke, L. Intra-visual conflict in visually induced motion sickness. Displays 2011, 32, 181–188.
30. Zielasko, D.; Meißner, A.; Freitag, S.; Weyers, B.; Kuhlen, T.W. Dynamic field of view reduction related to subjective sickness measures in an HMD-based data analysis task. In Proceedings of the IEEE VR Workshop on Everyday Virtual Reality, Reutlingen, Germany, 18 March 2018; Volume 2.
31. Bala, P.; Dionísio, D.; Nisi, V.; Nunes, N. Visually induced motion sickness in 360 videos: Comparing and combining visual optimization techniques. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; pp. 244–249.
32. Bos, J.E.; de Vries, S.C.; van Emmerik, M.L.; Groen, E.L. The effect of internal and external fields of view on visually induced motion sickness. Appl. Ergon. 2010, 41, 516–521.
33. Bonato, F.; Bubka, A.; Thornton, W. Visual blur and motion sickness in an optokinetic drum. Aerosp. Med. Hum. Perform. 2015, 86, 440–444.
34. Nie, G.Y.; Duh, H.B.L.; Liu, Y.; Wang, Y. Analysis on mitigation of visually induced motion sickness by applying dynamical blurring on a user's retina. IEEE Trans. Vis. Comput. Graph. 2019, 26, 2535–2545.
35. Kala, N.; Lim, K.; Won, K.; Lee, J.; Lee, T.; Kim, S.; Choe, W. P-218: An approach to reduce VR sickness by content based field of view processing. In Proceedings of the SID Symposium Digest of Technical Papers; Wiley Online Library: San Francisco, CA, USA, 2017; Volume 48, pp. 1645–1648.
36. Bles, W.; Wertheim, A. Appropriate Use of Virtual Environments to Minimise Motion Sickness; Human Factors Research Institute TNO: Soesterberg, The Netherlands, 2001; 10p. Available online: https://apps.dtic.mil/sti/pdfs/ADP010785.pdf (accessed on 16 October 2023).
37. Teixeira, J.; Palmisano, S. Effects of dynamic field-of-view restriction on cybersickness and presence in HMD-based virtual reality. Virtual Real. 2021, 25, 433–445.
38. Sakai, Y.; Watanabe, T.; Ishiguro, Y.; Nishino, T.; Takeda, K. Effects on user perception of a 'modified' speed experience through in-vehicle virtual reality. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings, Utrecht, The Netherlands, 22–25 September 2019; pp. 166–170.
39. Perrin, T.; Kerhervé, H.A.; Faure, C.; Sorel, A.; Bideau, B.; Kulpa, R. Enactive approach to assess perceived speed error during walking and running in virtual reality. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 622–629.
Figure 1. Images showing the 3 types of FoV present in the system. (a) is the FoV of the physical camera, the image of which is projected inside a virtual sphere. (b) is the FoV of the virtual camera inside the virtual sphere; this camera feed is directly sent to the VR HMD. (c) is the FoV inherent to the VR HMD itself.
Figure 2. A flowchart of how the camera feed flows through the system, based on the 3 components in Figure 1. The physical (or virtual) cameras render their captured feed onto the virtual sphere (b). A VR viewer in the virtual sphere (b) captures this feed and sends it to the physical VR HMD in (c). In the in-field setup, (a,c) are physical while (b) is virtual; in the simulated setup, (a,b) are simulated while only (c) is physical.
Figure 3. A rendering of the VR dome for the Vegvisir in-field setup.
Figure 4. Images comparing the simulation (A,C) with photos taken by the physical camera (B,D); the pairs (A,B) and (C,D) were each captured at roughly the same distance. The center 4 images show the calibration markers from the corresponding images at different distances, used to calibrate the virtual focal length.
Figure 5. The function parameter s for SSQ normalization using MSQ.
Figure 6. The function parameter s for SSQ normalization using MSQ but only calibrated on experiments that lasted over 15 min.
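Figures 5 and 6 calibrate a scaling function for normalizing SSQ by MSQ; the paper's exact functional form and calibrated parameter s are not reproduced here. As a rough, self-contained illustration of the idea, the sketch below scales SSQ by a hypothetical linear susceptibility factor anchored at the data set's median MSQ of 5.0. The function name, the default `s = 0.5`, and the linear form are all assumptions for illustration, not the paper's calibrated model.

```python
MEDIAN_MSQ = 5.0  # median MSQ in the study's data set (see Figure 7 caption)

def normalize_ssq(ssq: float, msq: float, s: float = 0.5) -> float:
    """Scale an SSQ score toward what a median-susceptibility user would
    report. `s` controls how strongly MSQ influences the scaling
    (illustrative only; the paper calibrates s from its own data)."""
    factor = 1.0 + s * (MEDIAN_MSQ - msq) / MEDIAN_MSQ
    return ssq * max(factor, 0.0)

# A user with median susceptibility is unchanged:
print(normalize_ssq(30.0, 5.0))   # 30.0
# A highly susceptible user's score is scaled down:
print(normalize_ssq(30.0, 10.0))  # 15.0
```

With `s = 0`, the scaling disappears and raw SSQ is returned, which is a useful sanity check for any calibration of this kind.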
Figure 7. Logarithmic, polynomial, exponential and linear regression curves fitted to the gathered SSQ data. The top plot contains the raw measurements; the bottom two are MSQ-scaled, with MSQ set to 0 and to 5.0 (the median of the data set). The green dotted line denotes the first impairment line and the red dotted line the approximate upper limit. The magenta dots are raw measurement samples. This figure and all following figures have time in minutes on the x-axis.
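The logarithmic growth model used for the fits in Figure 7 can be reproduced with a plain least-squares regression after a log transform of the time axis. The sketch below uses synthetic data shaped like the study's findings (a fast rise in the first 10–20 min that levels off by 40–60 min); the study's raw measurements are not reproduced here.

```python
import math

def fit_log_curve(times, scores):
    """Ordinary least-squares fit of SSQ(t) = a*ln(t) + b by linear
    regression on the log-transformed time axis."""
    xs = [math.log(t) for t in times]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Synthetic example data (illustrative, not measured values):
times = [5, 10, 20, 40, 60]
scores = [10.0, 17.0, 24.0, 31.0, 35.0]
a, b = fit_log_curve(times, scores)
```

Because ln(t) is linear in the regression, the fit needs no iterative optimizer, which makes the logarithmic model a convenient baseline against the polynomial and exponential alternatives shown in the figure.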
Figure 8. Logarithmic curvature prediction of the SSQ results comparing the in-field test to the simulation. The red SSQ line, marked with a ∗, is from a similar experiment by Benz et al. [20].
Figure 9. Mean SSQ results comparing the in-field test to the simulation. The red SSQ line, marked with a ∗, is from a similar experiment by Benz et al. [20].
Figure 10. Logarithmic curvature prediction of the SSQ results comparing the in-field test to the simulation. Only tests lasting longer than 15 min were included. The red SSQ line, marked with a ∗, is from a similar experiment by Benz et al. [20].
Figure 11. Mean SSQ results comparing the in-field test to the simulation. Only tests lasting longer than 15 min were included. The red SSQ line, marked with a ∗, is from a similar experiment by Benz et al. [20].
Figure 12. Mean SSQ results for the repeated tests. The pre-test was conducted on users who had no prior experience in VR. The 1st and 2nd tests were 1 h 15 min apart. The 2nd and 3rd tests were more than a week apart.
Figure 13. The top plot shows the raw results for each tested FoV setting; the bottom one is individually scaled to each user's results.
Figure 14. Mean SSQ results for the blur test. Both the new control and general reference line have been included.
Figure 15. Mean SSQ test results on the top figure and the individually scaled results for each user on the bottom.
Figure 16. Mean SSQ results for barrel and pincushion distortion effects on the camera feeds.
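The barrel and pincushion effects tested in Figure 16 can be described by the common single-coefficient radial distortion model r' = r(1 + k1·r²), where a negative k1 gives barrel distortion and a positive k1 gives pincushion. A minimal sketch on normalized image coordinates follows; the coefficients actually used in the experiments are not specified here, so the values below are illustrative.

```python
def distort(x: float, y: float, k1: float):
    """Map a normalized image coordinate (origin at the image center,
    roughly in [-1, 1]) through single-coefficient radial distortion."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# The image center is unaffected; edge points are pulled inward for
# barrel (k1 < 0) and pushed outward for pincushion (k1 > 0):
print(distort(0.0, 0.0, -0.2))  # (0.0, 0.0)
print(distort(1.0, 0.0, -0.2))  # (0.8, 0.0)   barrel
print(distort(1.0, 0.0,  0.2))  # (1.2, 0.0)   pincushion
```

Applying this mapping per pixel (with interpolation) reproduces the kind of camera-feed warping whose effect on motion sickness the figure reports as negligible.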
Figure 17. Mean SSQ results for exaggerated over- and under-exposure conditions compared against a control where only automatic white balance was used.
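The automatic white balance used as the control in Figure 17 is the camera's own algorithm, which is not specified here. A common minimal stand-in is the gray-world method: scale each color channel so that its mean matches the overall gray mean. The sketch below is illustrative only.

```python
def gray_world_awb(pixels):
    """Gray-world automatic white balance.
    pixels: list of (r, g, b) tuples. Returns rebalanced pixels whose
    per-channel means are all equal to the overall gray mean."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A reddish-tinted image is pulled back toward neutral:
balanced = gray_world_awb([(200, 100, 100), (220, 120, 120)])
```

Gray-world assumes the scene averages to neutral gray, which is why exaggerated over- or under-exposure (as tested in the figure) can push any AWB of this family off its assumptions.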
Table 1. Mean and standard deviation of all recorded SSQ values for unaltered simulator and real life experiments. The field vs. sim tests consisted of 52 samples and the adaptation test consisted of 26 samples.
Test Type               Subtype   RAW Mean   RAW Std   MSQ Mean   MSQ Std
Field vs. Sim           field     23.19      28.26     21.04      21.02
Field vs. Sim           sim       23.30      29.53     22.30      34.39
Field vs. Sim > 15 min  field     30.06      30.97     25.27      21.54
Field vs. Sim > 15 min  sim       31.58      37.60     39.45       5.83
Adaptation              pre       51.52      46.90     34.21      34.68
Adaptation              1st       20.80      23.39     17.06      18.97
Adaptation              2nd       17.77      27.75     14.06      21.59
Adaptation              3rd       15.19      17.37     10.72      12.17
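SSQ totals such as those tabulated here are conventionally computed with the Kennedy et al. (1993) weighting scheme: the 16 symptom ratings (0–3 each) are summed into three raw subscales, which are then multiplied by fixed weights. The sketch below takes the already-summed raw subscale values as input and omits the item-to-subscale grouping; it shows the standard scoring convention, not a procedure unique to this study.

```python
def ssq_scores(nausea_raw: int, oculomotor_raw: int, disorientation_raw: int):
    """Return the weighted SSQ subscale scores and the total score,
    using the standard Kennedy et al. (1993) weights."""
    nausea = nausea_raw * 9.54
    oculomotor = oculomotor_raw * 7.58
    disorientation = disorientation_raw * 13.92
    # The total uses the unweighted raw sums with its own weight:
    total = (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74
    return nausea, oculomotor, disorientation, total

n, o, d, total = ssq_scores(2, 3, 1)
# total == (2 + 3 + 1) * 3.74 == 22.44
```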
Table 2. Mean and standard deviation of all recorded SSQ values for HMD-based changes. The FoV tests consisted of 49 samples, blur consisted of 10 + 9 control samples and the vignette test consisted of 16 samples.
Test Type  Subtype       RAW Mean   RAW Std   MSQ Mean   MSQ Std
FoV test   26 deg        28.98      37.09     15.15      12.79
FoV test   30 deg        20.69      17.96     19.82       7.74
FoV test   34 deg        26.92      32.55     38.59      67.62
FoV test   44 deg        22.17      24.63     18.58      19.78
FoV test   49 deg        19.87      24.48     18.81      21.94
FoV test   54 deg        22.60      26.35     15.17      17.98
FoV test   60 deg        32.49      36.74     43.25      78.98
FoV test   63 deg        20.41      24.69     14.65      17.57
FoV test   84 deg        11.53      16.65      8.98      13.02
Blur       control       20.67      26.80     17.31      22.11
Blur       blur          20.94      24.42     23.76      42.34
Vignette   black 26 deg  45.35      39.78     -          -
Vignette   black 54 deg  31.42      31.59     -          -
Vignette   blur 26 deg   46.75      50.47     -          -
Vignette   blur 54 deg   58.09      41.60     -          -
Table 3. Mean and standard deviation of all recorded SSQ values for camera-based distortions. The camera distortion tests consisted of 14 samples and the illumination test consisted of 8 samples with 4 additional control samples.
Test Type            Subtype     RAW Mean   RAW Std   MSQ Mean   MSQ Std
Camera distortion    barrel       6.90       8.42      7.87      13.37
Camera distortion    pincushion   8.42      10.54      8.40      13.16
Camera illumination  control     15.90      20.03     13.94      16.78
Camera illumination  illum       20.94      24.42     23.76      42.34

Share and Cite

MDPI and ACS Style

Haamer, R.E.; Mikhailava, N.; Podliesnova, V.; Saremat, R.; Lusmägi, T.; Petrinec, A.; Anbarjafari, G. Motion Sickness in Mixed-Reality Situational Awareness System. Appl. Sci. 2024, 14, 2231. https://doi.org/10.3390/app14062231
