Tutorial
Abstract
The use of virtual reality (VR) stimulation in clinical settings has increased in recent years, with growing interest in its use for a variety of purposes, including medical training, pain therapy, and relaxation. Unfortunately, there is still a limited amount of real-world 360-degree content that is both available and suitable for these applications. Therefore, this tutorial paper describes a pipeline for the creation of custom VR content. It covers the planning and designing of content; the selection of appropriate equipment; the creation and processing of footage; and the deployment, visualization, and evaluation of the VR experience. This paper aims to provide a set of guidelines, based on first-hand experience, that readers can use to help create their own 360-degree videos. By discussing and elaborating upon the challenges associated with making 360-degree content, this tutorial can help researchers and health care professionals anticipate and avoid common pitfalls during their own content creation process.
JMIR Med Educ 2023;9:e42154. doi: 10.2196/42154
Keywords
Introduction
In recent years, there has been increasing interest in using immersive virtual reality (VR) technology in the clinical setting. This is a continuously growing field, and clinical applications include, among others, preparedness and medical training for staff, familiarization with the hospital setting, pain treatment, and anxiety treatment [
- ]. Specifically in the intensive care unit, the majority of research done using VR has examined its use with patients as a tool for relaxation [ ]. This is followed by its use for delirium prevention in patients; however, the approaches were similar in that they also used VR as a relaxation tool [ ].
Due to the increased interest and numerous potential applications for this technology, a group of 21 international VR experts recently worked together to develop a set of standards for best practices [
]. The standards aim to provide guidance when attempting to conduct VR treatments in health care as well as translate findings from VR research into practical applications [ ]. The Virtual Reality Clinical Outcomes Research Experts (VR-CORE) committee defined 3 phases that should be used when designing VR clinical studies, starting with content development [ ]. The VR-CORE members specifically suggested the use of human-centered design, emphasizing that patients and providers should be involved. This is related to the finding that personalization (allowing the participant to make decisions on various aspects of VR content) can contribute to the level of relaxation and engagement experienced by the user [ ]. Furthermore, previous studies have found that the effects of VR across a variety of applications, such as pain therapy and relaxation, are greater when using immersive VR technologies compared with other media such as television screens or headphones [ - ]. Consequently, the application of VR technology ideally requires 360-degree videos that are tailored to their intended purposes.
While specialized companies can be hired to create custom 360-degree videos to suit the specifications of a given project, these concepts and technologies are still relatively new, and their services often come at a premium. Alternatively, immersive content can be purchased on the internet, but this option has its own limitations. While more affordable, users of web-based content must consider the potential licensing issues and royalties associated with its use. Furthermore, videos purchased on the internet may also have limited customizability, such as the length of the content (which is typically restricted to a few minutes) or the location depicted. It may also not be possible to customize certain aspects of the purchased videos, such as the addition of audio tracks (eg, voice-guided meditation) [
, ]. Thus, one potential way of overcoming these limited options is to generate user-created 360-degree video content.
While previous tutorials have outlined how to create 360-degree VR content for training and environmental familiarization, they are limited in their applicability as well as the robustness of the methods described [
, ]. Specifically, these tutorials assume that the creator has access to a controlled environment with minimal risk of interaction with uncontrollable environmental factors. Additionally, these tutorials have failed to address certain steps that are vital for working with 360-degree videos and instead outsourced these steps or used the built-in software provided with the device as a workaround. The limited scope of these existing tutorials, especially in light of the recommendations of the VR-CORE group, highlights a gap in the literature regarding the creation of customizable in-house VR content that does not require the user to outsource certain aspects of the work or hire expensive companies.
The goal of this tutorial paper is to provide readers, be they researchers or health care professionals interested in applying VR in a clinical setting, with a pipeline that can be used to create custom 360-degree videos. The pipeline is intended to be usable for a variety of applications, across various levels of expertise, and for multiple target populations. The methods and advice provided in this paper are based on the study team's first-hand experience of creating 30-minute, 360-degree nature videos that were subsequently shown to patients with critical illness in an intensive care unit using a head-mounted display (HMD). While the focus of our work was the creation of relaxing scenes that featured nature, the steps outlined below can be easily generalized and used to record content that is more suitable for different purposes, such as pain therapy or distraction, in which the 360-degree exploration of a given setting is desired [
, , , - ].
Editing Pipeline
Overview
The pipeline presented in this paper focuses on 5 main aspects that must be considered when creating 360-degree videos (
).
The first aspect is the planning and designing of the content, which requires making decisions on a variety of parameters, such as the duration of the content, as well as its visual and auditory components. The second aspect covers the auditory and visual equipment necessary as well as how to record the footage. The third aspect involves the creation and processing of the final footage, with a specific focus on how to combine the recorded content into a single 360-degree video and postprocessing considerations. The fourth aspect discusses the hardware that should be used for deployment and visualization; it includes considerations regarding the visualization of the 360-degree content, the choice of VR device, and hygiene precautions. Finally, this tutorial discusses how to evaluate the VR experience. Additional detailed descriptions of the experimental setup as well as potential use cases following this pipeline can be found in
.
Planning and Designing of the Content
Overview
Before beginning to record any content, it is important to know what types of scenes should be recorded and their duration; this will depend on the overall purpose of the VR content. For example, if planning to use VR for relaxation purposes in patients with critical illness, the literature suggests that the duration should ideally be around 10-15 minutes [
]. On the other hand, if VR is to be used as a meditation tool to improve sleep in patients with critical illness, then a duration of 30 minutes may be better suited to that purpose [ ]. Relevant points to consider are the visual content, the auditory content, and the duration of the content.
Visual Content
The types of scenes to be recorded will depend on the purpose of the content. Specifically, if the content is intended to provide a distraction, then engaging material such as cartoons or interactive scenes, as has been used for pain therapy, may be the best choice [
, , ]. If, however, the goal is relaxation, a calm nature scene may be more suitable than an urban environment [ , , ]. Based on this, outlines several important questions that should be considered when deciding what type of content should be recorded.
Questions | Considerations
What type of activity should be present? (eg, human activity, animal activity, nonhuman activity) |
How much activity should be present? (eg, constant activity, intermittent activity) |
How quickly should the activity occur or the scene change? |
Should the recording be static or dynamic? |
What about cybersickness? |
aVR: virtual reality.
Auditory Content
Like visual content, the type of auditory content that should be included is dependent on the goals of the video (eg, voice-guided meditation or relaxing nature sounds) [
, , ]. outlines several important questions that should be considered when recording and postprocessing the auditory content for 360-degree videos.
Questions | Considerations
What type of auditory content should be presented? |
Should the auditory input be monophonic or stereophonic? |
At what volume should the auditory input be presented? |
Duration of the Content
A final component to consider is the length of the content as well as how it relates to both the visual and auditory content. The ideal video duration is dependent on how the final video will be used as well as the content itself. For example, if the aim is to distract people, as is often the case in pain therapy [
], then individual videos could be shorter. In contrast, if the goal of the video is to induce a relaxation effect (for example, by showing the viewer a sunset), then the video should be long enough to capture this event. In general, the overall length of the video can be adjusted by adding or removing video footage. Alternatively, the length can be adjusted by ensuring that the video can be looped, that is, that the end of the video can seamlessly transition into its start without a perceptible difference in the content. In this way, the duration of the footage can be adapted at a later stage (eg, during postprocessing), adding another layer of personalization.
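To illustrate the looping approach, the sketch below repeats a seamlessly loopable clip until a target duration is reached. It assumes that the free command-line tool ffmpeg is installed; the file names and durations are placeholders, and the same result can be achieved in any video editor.

```python
# Minimal sketch: extend a seamlessly loopable 360-degree clip to a target
# duration by repeating it with ffmpeg (assumed to be installed and on the
# system PATH). File names and durations are placeholders.
import math
import subprocess

def loop_clip(input_path: str, output_path: str, clip_seconds: float, target_seconds: float) -> None:
    """Repeat a loopable clip until it reaches at least the target duration."""
    extra_loops = math.ceil(target_seconds / clip_seconds) - 1  # additional repetitions needed
    subprocess.run(
        [
            "ffmpeg",
            "-stream_loop", str(extra_loops),  # repeat the input this many extra times
            "-i", input_path,
            "-c", "copy",                      # no re-encoding, so quality is preserved
            "-t", str(target_seconds),         # trim the result to the target duration
            output_path,
        ],
        check=True,
    )

# Example: turn a 5-minute loopable clip into a 30-minute video.
loop_clip("forest_loop.mp4", "forest_30min.mp4", clip_seconds=300, target_seconds=1800)
```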
Equipment and Recording of the VR Content
Visual Equipment
As the technology available for recording 360-degree videos continues to evolve, there are an increasing number of commercially available cameras. There are 2 main types: monoscopic and stereoscopic cameras. Monoscopic VR uses multiple monoscopic cameras attached to a rig to film multiple fields of view that are stitched together in the postprocessing stage. Stereoscopic VR also uses multiple cameras to film multiple fields of view, the difference being that, in stereoscopic cameras, there is a lens assigned to each eye. In this way, stereoscopic cameras can generate 3D content that cannot be achieved using a monoscopic rig unless special postprocessing techniques are used.
In addition to the camera itself, there are additional accessories that are required. Videos recorded using 360-degree cameras can be extremely large depending on the resolution, frame rate, and duration of the recording and may require additional storage solutions as well as a powerful computer for processing. The exact specifications will depend on the camera used and the video files generated; a more detailed example can be found in . The camera may also require supplementary batteries or an additional power source if outdoor scenes are being captured. The environmental conditions may also necessitate the purchase of additional accessories related to wind or rain protection.
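As a rough planning aid for storage, the size of a recording grows linearly with bitrate and duration. The short sketch below illustrates this arithmetic; the 120 Mbit/s bitrate is only an assumed example value, so the camera's own specifications should be used in practice.

```python
# Rough storage estimate for 360-degree footage: file size grows linearly with
# bitrate and duration. The bitrate below (120 Mbit/s) is an illustrative
# assumption; consult the camera's specifications for the real value.

def estimated_size_gb(bitrate_mbps: float, duration_minutes: float) -> float:
    """Approximate file size in gigabytes for a given bitrate and duration."""
    bits = bitrate_mbps * 1e6 * duration_minutes * 60  # total bits recorded
    return bits / 8 / 1e9                              # bits -> bytes -> gigabytes

# Example: a 30-minute recording at 120 Mbit/s occupies roughly 27 GB,
# before counting the separate files produced by each lens or any proxies.
print(f"{estimated_size_gb(120, 30):.0f} GB")
```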
Finally, as this setup may appear intriguing to passing individuals, it may be useful to consider some protective measures. Specifically, it may be useful to attach signs to the camera tripod, warning individuals to refrain from approaching or touching the camera, as this may disrupt the recording and have a disorienting effect on the viewer. Fluorescent cones can also be placed around the base of the tripod to prevent individuals from approaching the camera or accidentally knocking the camera down ( ). Any person or object that comes close to the camera can cause discomfort for the end user, as they might feel that the object approached too close to them. Lastly, as the camera captures footage of its entire surroundings, there is no way for someone who does not wish to be filmed to pass by undetected. To prevent problems associated with this issue, it can be useful to place signs at an appropriate distance from the camera that warn passersby that they will be filmed if they continue on this path. Individuals who do not wish to be filmed can then choose an alternative route.
Recording Perspective
When filming on location, the easiest and most flexible way of positioning the camera to capture scenes is to use a camera tripod. There are 2 recommended heights that should be used to guarantee a natural viewing experience: a height that is comparable to someone who is standing or a height that is comparable to someone who is seated on the ground. Selecting 1 of these 2 heights will increase the likelihood that the user will experience the scenes as if they were truly present at that location (
).
Audio Equipment
There are 4 main methods by which auditory content can be added to 360-degree videos. First, one may choose to film the visual content using devices that have built-in microphones. This option allows for the most seamless combination of visual and auditory content. However, recording the auditory content may not be straightforward, depending on the equipment available and used. For example, cameras that have a built-in cooling fan may result in poor-quality audio recordings. Additionally, microphones that are built into the camera cannot be customized; as a second option, users may wish to use an external microphone to record the soundscape.
There are many different devices available depending on budget and the desired specifications, which will be dependent on the content considerations discussed above. Care must be taken to find a recording device with a suitable range that is capable of picking up on activities in close proximity but not extraneous sounds, such as sounds from a distant highway [
]. Additional characteristics that should be considered include the power supply (ie, cabled or battery-powered), internal storage capabilities, and wind protection. The latter is particularly important for outdoor recordings, as even the slightest gust of wind can be heard on audio recordings [ ]. If the video content is expected to include a vocal track (for example, to allow for a guided meditation routine or to act out a scene), it may be important to consider how this voice will be captured by the device. A better solution may be to record these audio tracks separately and add them to the video during postprocessing.
In cases where it is not possible to record sounds on location, sound clips can be stitched together using dedicated software such as Audacity (Audacity, Inc). However, finding available sound sources and creating an audio clip is not always easy. Furthermore, ensuring that transitions between clips are undetectable and ensuring proper fading can be challenging. Discrepancies that cannot be heard during postprocessing may be noticeable when played on higher-quality devices. This makes creating high-quality audio clips difficult and time-consuming.
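For creators who do assemble their own soundscape, the sketch below shows one possible way of joining two ambience clips with a crossfade using the free command-line tool ffmpeg; the file names and fade length are placeholders, and an editor such as Audacity can achieve the same result interactively.

```python
# Sketch of joining two ambience recordings with a gentle crossfade so that the
# transition is harder to detect. This uses ffmpeg's "acrossfade" filter and is
# only one possible approach. File names are placeholders.
import subprocess

def crossfade(first: str, second: str, output: str, fade_seconds: float = 4.0) -> None:
    """Concatenate two audio files, overlapping them by fade_seconds."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", first,
            "-i", second,
            "-filter_complex", f"[0:a][1:a]acrossfade=d={fade_seconds}:c1=tri:c2=tri",
            output,
        ],
        check=True,
    )

crossfade("birdsong_part1.wav", "birdsong_part2.wav", "birdsong_joined.wav")
```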
Therefore, a final option for adding auditory content to the videos is the purchase of professionally recorded sounds. As with the video content, longer-duration, nonlooping content may be limited, though it is not impossible to obtain. It should be noted that the misalignment of audio and video is detectable by a trained ear, and certain viewers may be perturbed if actions such as footsteps can be seen but not heard [
].
The Creation and Processing of the Final 360-Degree Footage
Recording the desired content using a 360-degree camera is only part of the process. The stitching and postprocessing of the footage are both equally important components of video creation and may require special consideration, as outlined below.
Stitching and Auto-Stitching
As 360-degree VR recordings use multiple cameras, as described above, stitching is an important part of the postprocessing step. During stitching, the videos recorded from each camera are merged into a single file, such that there is no clear start or end to the visual field as the user turns around. However, the process of merging these videos is not trivial, as the recorded videos have overlapping fields of view. This means that each lens captures a portion of the surrounding environment that is also captured by a neighboring lens. The videos must, therefore, be properly aligned in these overlap regions to avoid double vision. This process is easily accomplished for still pictures or videos with little activity but becomes more challenging with increasing activity, as objects can pass over the stitch lines.
As this process can be difficult and time-consuming, many 360-degree cameras now come with proprietary software that automatically stitches the footage together. Such software can produce relatively good results, particularly when there is limited activity or when objects and activity take place further from the camera. The recommended minimum distance that should be maintained between all activity and the camera to ensure the best result is defined in the camera’s user manual (typically 1.5 m).
Finally, if the proprietary software is unable to generate smooth stitch lines, more advanced programs such as Mistika VR (Soluciones Graficas Por Ordenador SL) can be used to improve the final video. These programs allow the stitch lines to be visualized and manually adjusted through edge points so that they do not run directly through moving objects or elements that are important to the video. Valuable written advice and video tutorials made by content creators as well as the developers of these different stitching programs are available on the web; these resources describe the steps needed to improve the stitching of a video in great detail.
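To give a sense of what stitching software does along a stitch line, the simplified sketch below blends two already-aligned frames across their overlap region with linearly ramped weights (often called feathering). Real stitching programs additionally correct lens distortion and align the images first, so this is an illustration of the blending concept only; the image sizes and overlap width are arbitrary assumptions.

```python
# Simplified illustration of a stitch line: where the fields of view of two
# lenses overlap, the images are blended with weights that ramp from one camera
# to the other ("feathering"). This sketch assumes two already-aligned frames
# of equal height with a known overlap width.
import numpy as np

def feather_blend(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Blend two aligned frames whose last/first `overlap` columns coincide."""
    weights = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across the overlap
    blended = left[:, -overlap:] * weights + right[:, :overlap] * (1.0 - weights)
    return np.concatenate(
        [left[:, :-overlap], blended.astype(left.dtype), right[:, overlap:]], axis=1
    )

# Example with random "frames": two 1080 x 960 images sharing a 100-pixel overlap.
a = np.random.randint(0, 255, (1080, 960, 3), dtype=np.uint8)
b = np.random.randint(0, 255, (1080, 960, 3), dtype=np.uint8)
panorama = feather_blend(a, b, overlap=100)
print(panorama.shape)  # (1080, 1820, 3)
```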
Postprocessing
Postprocessing is an important step that allows the user to add external audio tracks, improve lighting, make color adjustments, and remove unwanted objects. Various programs can be used for postprocessing; some require the purchase of a paid license, while others are free. A detailed description of programs used by the study team can be found in the example provided in
.
If the camera's built-in microphone is used, then the video and audio files will be automatically loaded into the program simultaneously. Alternatively, if the audio files are recorded using an external device, stitched together, or purchased, then they must be added to the video file separately. This can be done before or after editing the visual content, as these files are independent of each other.
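As an illustration of this step outside of an editing program, the sketch below attaches a separately recorded audio track to a stitched 360-degree video using ffmpeg; the file names and audio settings are assumptions, and the same operation can be performed in the editing software itself.

```python
# Sketch of attaching a separately recorded (or purchased) audio track to the
# stitched 360-degree video with ffmpeg, keeping the video stream untouched.
# File names and the audio codec settings are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "stitched_video.mp4",   # stitched 360-degree footage (no usable audio)
        "-i", "soundscape.wav",       # externally recorded soundscape
        "-map", "0:v", "-map", "1:a", # video from the first input, audio from the second
        "-c:v", "copy",               # do not re-encode the video
        "-c:a", "aac", "-b:a", "192k",
        "-shortest",                  # stop at whichever stream ends first
        "video_with_audio.mp4",
    ],
    check=True,
)
```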
Lighting and color adjustments can also be made during postprocessing. By following photographic principles, the lighting and colors in the video can be adjusted to convey a specific tone, mood, and atmosphere. The user may also wish to remove certain objects from the recorded footage, such as the camera's tripod. Alternatively, it may be easier to cover an object, such as a company logo, than to edit it out. Generally, video editing programs allow users to cut out unwanted objects or color over them. If this approach is taken, then it should be noted that, depending on the duration of the video, the lighting in the scene may change. Therefore, the process of editing or removing objects may need to be done in multiple steps to ensure that the lighting and colors match. Additionally, with longer-duration videos, there is also a higher likelihood of objects in the surrounding environment interfering with the recording, such as insects landing on the camera lens. These can also be edited out during postprocessing.
In the final postprocessing step, certain settings may have to be altered to ensure that the final video is compatible with the hardware used to display it (
). Within the postprocessing software itself, it may also be necessary to indicate that the content is for VR purposes and, subsequently, whether the content is monoscopic or stereoscopic. The resolution, frames per second, and necessary codecs can also be adjusted at this stage. The playback device may also require filenames to be formatted in a specific way so that the device can recognize 360-degree content. and contain examples of video clips exported using the settings listed in .
Attribute | Setting
Codec | H.264 |
Width (pixels) | 5760 |
Height (pixels) | 2880 |
Frame rate (frames per second) | 24 |
Aspect ratio | Square pixels |
Bitrate | Variable |
Virtual reality mode | Monoscopic (360° × 180°) |
Time interpolation | Free sampling |
Metadata | Enabled |
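For readers who prefer a command-line route, the sketch below approximates the export settings listed above using ffmpeg. This is not the workflow used by the study team, who exported directly from their editing software, and the input file name is a placeholder. Note that the spherical (360-degree) metadata is typically injected afterward with a dedicated tool, such as Google's open-source Spatial Media Metadata Injector, whose exact usage depends on the installed version.

```python
# Illustrative export with ffmpeg approximating the settings in the table above
# (H.264, 5760 x 2880, 24 frames per second, variable bitrate). This is an
# alternative sketch, not the study team's workflow. The 360-degree metadata
# itself is usually added afterwards with a separate injector tool.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "edited_master.mov",        # placeholder name for the edited master file
        "-c:v", "libx264",                # H.264 codec
        "-vf", "scale=5760:2880",         # width x height in pixels (2:1 equirectangular)
        "-r", "24",                       # frame rate in frames per second
        "-crf", "18", "-preset", "slow",  # quality-based (variable) bitrate control
        "-pix_fmt", "yuv420p",            # widely compatible pixel format
        "-c:a", "aac", "-b:a", "192k",
        "exported_360.mp4",
    ],
    check=True,
)
```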
Hardware for Deployment and Visualization
Choice of Device
Once the video files have been exported, the 360-degree videos can be played on a computer, mobile phone, or VR headset. Videos on a computer or mobile device may require specific software or can be viewed directly through YouTube (YouTube, LLC) or Facebook (Meta Platforms, Inc), assuming that the video was properly uploaded for 360-degree playback on these platforms. On these devices, the virtual environment can be explored by panning around the scene. On mobile devices, the user can explore the scene by pointing the device in the direction they wish to look. This is similar to VR headsets, in which the scene rotates as the user moves their head, immersing the user in the virtual environment. In this way, exploration of the scene using VR is achieved very naturally.
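The reason panning or head rotation works is that a monoscopic 360° × 180° video is stored as an equirectangular image in which every viewing direction corresponds to a pixel position; the player crops and warps the region around that position into the current field of view. The short sketch below illustrates this mapping; the frame size matches the export resolution listed above.

```python
# Illustration of why head or cursor movement "pans" the scene: a monoscopic
# 360 x 180 degree video is stored as an equirectangular image, so every
# viewing direction (yaw, pitch) corresponds to one pixel position.
def direction_to_pixel(yaw_deg: float, pitch_deg: float, width: int, height: int) -> tuple[int, int]:
    """Map a viewing direction to equirectangular pixel coordinates.

    yaw_deg:   -180..180, 0 = straight ahead, positive = to the right
    pitch_deg:  -90..90,  0 = horizon, positive = up
    """
    x = (yaw_deg + 180.0) / 360.0 * (width - 1)
    y = (90.0 - pitch_deg) / 180.0 * (height - 1)
    return round(x), round(y)

# Looking straight ahead at the horizon lands in the centre of a 5760 x 2880 frame.
print(direction_to_pixel(0, 0, 5760, 2880))  # approximately (2880, 1440)
```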
Currently, there are several commercially available HMDs for displaying immersive VR content. They can all be split into 2 main categories: tethered and untethered devices. Tethered devices use a cabled connection to a powerful computer to acquire and display the VR content and often require additional equipment such as base stations and lighthouses for tracking. In contrast, untethered devices are less powerful and may require a wireless connection to transmit and receive content, but they are cable-free. This has the advantage of making the user feel less restricted in their movements and can make the device easier to use as there are no external components to consider. Another aspect that may be relevant for use in a clinical or research setting is the ability of devices to launch in what is known as kiosk mode. This mode allows the device to run a specific application when turned on, thus requiring less hands-on manipulation of the interface.
In addition to ease-of-use considerations, the internal specifications of the device need to be considered. Devices with better specifications often cost more; more expensive devices tend to offer higher image quality, resulting in a more realistic virtual environment. This is important, as it can increase the users’ sense of presence [
]. For this reason, specifications such as the resolution per eye, refresh rate, field of view, and display type all play a role in the user experience and should be considered.
In addition to the internal specifications of the device, there are also physical components to consider. To improve user enjoyment and increase immersion within the virtual environment, the HMD should be as unobtrusive as possible [
]. This means that aspects such as weight, counterbalance, padding, and adjustability, to name a few, should be considered when selecting an appropriate device. Wearing a heavy device that does not provide adequate counterbalance (usually located at the back of the user's head) can result in discomfort on the wearer's nose. However, while the addition of a counterbalance may alleviate pressure on the user's nose, it may increase their discomfort when leaning their head against a support. The material and amount of padding, as well as the ability to adjust the tightness of the device, can also play a role in the user's overall comfort. Additionally, the ability to adjust the device's internal lenses can allow for a clearer and more focused image to be achieved.
A final aspect that should be considered is the auditory output of the device. Some devices have integrated speakers, whereas others do not. While integrated speakers may be suitable for VR gaming, they may be less desirable if the goal is to envelop the user in the virtual environment. In this case, one may choose to use external headphones (preferably noise-canceling), which can be connected to the device, typically through a cabled connection. In general, there are no special requirements to be considered in terms of compatibility between the headphones and the VR device; any commercially available headphones can be used.
Hygiene
Hygiene is of particular importance when using HMDs in a clinical setting. All parts of the device must be disinfected when switching between users, and these considerations must also be accounted for when selecting a device. Devices with a plastic outer covering can be wiped clean with hospital-grade disinfectant wipes. However, some devices, such as the first-generation Oculus Quest (Meta Platforms, Inc), have a fabric covering. To overcome this limitation, a custom-made fabric cover that can be disinfected may be required (
). It should be noted that the lenses of the headset cannot be disinfected with any product that contains alcohol. For such components, a UVC disinfection box (Cleanbox Technology, Inc) can be used; this allows any VR device, including its lenses, to be safely disinfected in 60 seconds ( ). Such a disinfection box can also be used to disinfect over-the-ear headphones that often contain a fabric lining inside their earpieces. Headphones can also be covered with a disposable sanitary earpiece protector to improve hygiene ( ).
Arcades and specialized gaming centers are also concerned with hygiene problems but are often less strict with their requirements. These centers often use silicone covers that can be disinfected with alcohol-based wipes or disposable pads that can be placed on the portion of the headset that touches the user's face. Although these solutions are plausible, sweating can cause silicone covers to become uncomfortable while also dampening disposable pads, causing them to slip out of place. If considering long-term use, the financial aspect of using disposable pads may also need to be considered.
Evaluation of the VR Experience
Conducting research using VR is not limited to content; research surrounding VR also focuses on understanding the user experience, specifically as it pertains to presence and immersion within a virtual environment. There are 3 main validated questionnaires that are used to measure presence. The most cited test is a 32-item presence questionnaire developed by Witmer and Singer [
], followed by the 6-item presence questionnaire known as the Slater-Usoh-Steed questionnaire [ ], and the 13-item iGroup Presence Questionnaire developed by Schubert et al [ , ]. Similarly, there are validated questionnaires concerned with immersion, such as the one developed by Tcha-Tokey et al [ ], which is based on the Immersive Tendencies Questionnaire developed by Witmer and Singer [ ].
In addition to measuring presence and immersion, it is important to quantify any negative side effects, known as cybersickness, caused by the virtual world [
]. These may include, but are not limited to, symptoms such as nausea, dizziness, headache, and eye strain [ ]. These symptoms can be assessed by the validated Simulator Sickness Questionnaire [ ]. However, care should be taken to record baseline symptoms in clinical subpopulations [ ].
Finally, if the goal is to conduct a scientific study using VR, researchers may wish to consider including eye and head tracking. This could provide information about what parts of the video the user was focusing on and how much they explored their virtual environment. However, eye and head tracking are not supported by all HMDs; these requirements should be considered when selecting an appropriate HMD.
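If head tracking is available, one simple way of quantifying how much of the scene a user explored is to log the head yaw angle during playback and compute the fraction of the horizontal field that was visited. The sketch below illustrates this idea; how the orientation log is obtained depends entirely on the headset and its software, and the sample log and bin width are hypothetical.

```python
# Sketch of summarizing head-tracking data for the evaluation: given a log of
# head yaw angles sampled during playback (obtaining such a log depends on the
# headset and its software development kit), the fraction of the horizontal
# 360-degree field that was looked at can serve as a simple exploration metric.
# The log and bin size below are hypothetical.
def horizontal_exploration(yaw_samples_deg: list[float], bin_width_deg: float = 15.0) -> float:
    """Return the fraction of 360-degree yaw bins visited at least once."""
    n_bins = int(360 / bin_width_deg)
    visited = {int((yaw % 360) / bin_width_deg) for yaw in yaw_samples_deg}
    return len(visited) / n_bins

# Example: a user who mostly looked ahead and briefly to the right.
log = [0, 2, 5, 3, 1, 40, 42, 45, 38, 0, -3, -5]
print(f"{horizontal_exploration(log):.0%} of the horizontal field explored")
```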
Discussion
Overview
Creating custom 360-degree videos for use in VR-focused research is not an easy task. Unfortunately, due to the relative novelty of the technology and its use in displaying such content, there is a lack of clear resources available, particularly for those unaccustomed to video editing. Therefore, the goal of this tutorial was to provide information pertaining to the creation, playback, and evaluation of 360-degree videos, with several concrete examples provided in
and . This expands on existing tutorials, which provide a narrower and less complete scope of information [ , , ].
While the use cases discussed here refer to the clinical setting, with relaxing 360-degree content provided as an example, there are also nonclinical applications for which the current tutorial could be useful [
, , ]. As referred to throughout the tutorial, 1 nonclinical use case could involve the use of VR for guided meditation [ , ]. Another example could include using VR to explore and study architecture or to improve the general well-being of individuals without access to nature [ , ]. In both of these cases, 360-degree VR content is used. Therefore, the same principles described in this tutorial can be used to create content suited to those purposes, thereby extending the target population beyond researchers and health care professionals. In this way, this tutorial can act as a reference for anyone looking to create their own content.
Challenges
Based on our own experience, one of the greatest challenges in creating 360-degree VR content is the length of the recordings. While shorter videos that last less than 5 minutes can be recorded quite easily, there are several equipment- and environment-related issues that must be considered when longer videos are required, especially in environments that are exposed to uncontrollable factors. Not only is there an increased chance of environmental interference, such as insects landing on the lenses or individuals approaching the camera, but there are also technical challenges that must be overcome. These include camera overheating and battery life, which must be considered before filming. Additionally, issues associated with file storage capacity and computational power become important when conducting postprocessing on longer videos.
Another challenge to consider when producing 360-degree videos is the content. The team associated with this study focused on recording calm scenes based on the natural environment, as the goal was to achieve a relaxation effect [
, , , - ]. As such, heavy equipment often had to be brought to remote locations in a backpack. It was also difficult to find the correct balance of activity and nonactivity based on the goals of the project. For example, although the footage was intended to relax the viewer, there still needed to be enough activity to ensure that the video did not appear to be a still picture and that there was enough change in the environment to retain the viewer's interest. Additionally, when filming in a public location, filmmakers must consider the legality of filming individuals who enter the frame of the camera. This problem is particularly relevant when recording 360-degree footage, as there is no way for an individual to avoid being filmed once they enter the camera's field of view. One option to increase the amount of activity in a scene without running afoul of legal issues is to hire actors to create an engaging scene that is appropriate for the purposes of the content.
Future Research
To address some of these concerns, future studies using 360-degree videos should conduct a prestudy that examines the suitability of their content for their intended purposes. In this way, the reaction of the target population can be examined at an early stage, before the investment of time and resources that are required to make all of the content. However, the suitability of the content will, to some extent, always depend on the individual. Another aspect that could be further investigated is the inclusion of different sounds. Specifically, the overall influence that the choice of sound has on the feeling of immersion within the VR environment could be examined. In this way, the user’s experience could potentially be improved.
Conclusions
This tutorial provides users with a pipeline for the creation of customizable 360-degree videos based on first-hand experience. As the field of VR research and use continues to grow and the technology becomes more accessible to the general public, this paper will hopefully guide users through the process of creating content that is suited to their individual needs while avoiding common pitfalls associated with content creation. In doing so, this tutorial fills a gap in the literature and expands upon previously published tutorials focused on the creation of 360-degree videos by explaining 5 key considerations associated with the creation, deployment, and evaluation of 360-degree VR content.
Acknowledgments
The authors would like to acknowledge Carina Röthlisberger for her assistance in recording and troubleshooting the first videos. The authors would also like to thank Listening Earth for sharing their expertise regarding sound recordings and allowing us to use their recordings in our research.
Data Availability
Data sharing is not applicable to this article as no data sets were generated or analyzed for this tutorial paper.
Authors' Contributions
ACN undertook the conceptualization, methodology, writing of the original draft, review, and editing of this manuscript. MMJ, SMJ, RMM, and TN took part in the conceptualization, supervision, writing of the original draft, review, and editing of this paper.
Conflicts of Interest
None declared.
Document outlining the details of the pipeline that the study team implemented while recording their nature videos.
DOCX File, 179 KB
A 2-minute, 360-degree-enabled video. The 360-degree scene can be explored by panning around the video.
MP4 File (MP4 Video), 181075 KB
A 50-second video clip of a screen capture of a 360-degree-enabled video. Exploration of the scene was done using the cursor. Note that exploration within an HMD would appear smooth. HMD: head-mounted display.
MP4 File (MP4 Video), 17702 KB
References
- Jung Y. Virtual Reality Simulation for Disaster Preparedness Training in Hospitals: Integrated Review. J Med Internet Res. Jan 28, 2022;24(1):e30600. [FREE Full text] [CrossRef] [Medline]
- Patel D, Hawkins J, Chehab LZ, Martin-Tuite P, Feler J, Tan A, et al. Developing virtual reality trauma training experiences using 360-degree video: tutorial. J Med Internet Res. 2020;22(12):e22420. [FREE Full text] [CrossRef] [Medline]
- Ruthenbeck GS, Reynolds KJ. Virtual reality for medical training: the state-of-the-art. Journal of Simulation. 2015;9(1):16-26. [CrossRef]
- O'Sullivan B, Alam F, Matava C. Creating low-cost 360-degree virtual reality videos for hospitals: a technical paper on the dos and don'ts. J Med Internet Res. 2018;20(7):e239. [FREE Full text] [CrossRef] [Medline]
- Bernaerts S, Bonroy B, Daems J, Sels R, Struyf D, Gies I, et al. Virtual reality for distraction and relaxation in a pediatric hospital setting: an interventional study with a mixed-methods design. Front Digit Health. 2022;4:866119. [FREE Full text] [CrossRef] [Medline]
- Indovina P, Barone D, Gallo L, Chirico A, De Pietro G, Giordano A. Virtual reality as a distraction intervention to relieve pain and distress during medical procedures: a comprehensive literature review. Clin J Pain. 2018;34(9):858-877. [CrossRef] [Medline]
- Malloy KM, Milling LS. The effectiveness of virtual reality distraction for pain reduction: a systematic review. Clin Psychol Rev. 2010;30(8):1011-1018. [FREE Full text] [CrossRef] [Medline]
- Tashjian VC, Mosadeghi S, Howard AR, Lopez M, Dupuy T, Reid M, et al. Virtual reality for management of pain in hospitalized patients: results of a controlled trial. JMIR Ment Health. 2017;4(1):e9. [FREE Full text] [CrossRef] [Medline]
- Hill JE, Twamley J, Breed H, Kenyon R, Casey R, Zhang J, et al. Scoping review of the use of virtual reality in intensive care units. Nurs Crit Care. 2021;27(6):756-771. [CrossRef] [Medline]
- Birckhead B, Khalil C, Liu X, Conovitz S, Rizzo A, Danovitch I, et al. Recommendations for methodology of virtual reality clinical trials in health care by an international working group: iterative study. JMIR Ment Health. 2019;6(1):e11973. [FREE Full text] [CrossRef] [Medline]
- Pardini S, Gabrielli S, Dianti M, Novara C, Zucco GM, Mich O, et al. The role of personalization in the user experience, preferences and engagement with virtual reality environments for relaxation. Int J Environ Res Public Health. 2022;19(12):7237. [FREE Full text] [CrossRef] [Medline]
- Felemban OM, Alshamrani RM, Aljeddawi DH, Bagher SM. Effect of virtual reality distraction on pain and anxiety during infiltration anesthesia in pediatric patients: a randomized clinical trial. BMC Oral Health. 2021;21(1):321. [FREE Full text] [CrossRef] [Medline]
- Gerber SM, Jeitziner MM, Sänger SD, Knobel SEJ, Marchal-Crespo L, Müri RM, et al. Comparing the relaxing effects of different virtual reality environments in the intensive care unit: observational study. JMIR Perioper Med. 2019;2(2):e15579. [FREE Full text] [CrossRef] [Medline]
- Naef AC, Jeitziner MM, Knobel SEJ, Exl MT, Müri RM, Jakob SM, et al. Investigating the role of auditory and visual sensory inputs for inducing relaxation during virtual reality stimulation. Sci Rep. 2022;12(1):17073. [FREE Full text] [CrossRef] [Medline]
- Villani D, Riva F, Riva G. New technologies for relaxation: the role of presence. Int J Stress Manag. 2007;14(3):260-274. [CrossRef]
- Yildirim C, O'Grady T. The efficacy of a virtual reality-based mindfulness intervention. Presented at: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR); December 14-18, 2020; Utrecht, Netherlands. p. 158-165. URL: https://ieeexplore.ieee.org/xpl/conhome/9318989/proceeding [CrossRef]
- Lee SY, Kang J. Effect of virtual reality meditation on sleep quality of intensive care unit patients: a randomised controlled trial. Intensive Crit Care Nurs. 2020;59:102849. [FREE Full text] [CrossRef] [Medline]
- Navarro-Haro MV, López-Del-Hoyo Y, Campos D, Linehan MM, Hoffman HG, García-Palacios A, et al. Meditation experts try virtual reality mindfulness: a pilot study evaluation of the feasibility and acceptability of virtual reality to facilitate mindfulness practice in people attending a mindfulness conference. PLoS One. 2017;12(11):e0187777. [FREE Full text] [CrossRef] [Medline]
- Laumann K, Gärling T, Stormark KM. Selective attention and heart rate responses to natural and urban environments. Journal of Environmental Psychology. Jun 2003;23(2):125-134. [FREE Full text] [CrossRef]
- Ulrich RS, Simons RF, Losito BD, Fiorito E, Miles MA, Zelson M. Stress recovery during exposure to natural and urban environments. J Environ Psychol. 1991;11(3):201-230. [CrossRef]
- Pretty J, Peacock J, Sellens M, Griffin M. The mental and physical health outcomes of green exercise. Int J Environ Health Res. 2005;15(5):319-337. [CrossRef] [Medline]
- Hoffman HG, Chambers GT, Meyer WJ, Arceneaux LL, Russell WJ, Seibel EJ, et al. Virtual reality as an adjunctive non-pharmacologic analgesic for acute burn pain during medical procedures. Ann Behav Med. 2011;41(2):183-191. [FREE Full text] [CrossRef] [Medline]
- Pourmand A, Davis S, Marchak A, Whiteside T, Sikka N. Virtual reality as a clinical tool for pain management. Curr Pain Headache Rep. 2018;22(8):53. [CrossRef] [Medline]
- Naef AC, Erne K, Exl MT, Nef T, Jeitziner MM. Visual and auditory stimulation for patients in the intensive care unit: a mixed-method study. Intensive Crit Care Nurs. 2022;73:103306. [FREE Full text] [CrossRef] [Medline]
- Gidlow CJ, Jones MV, Hurst G, Masterson D, Clark-Carter D, Tarvainen MP, et al. Where to put your best foot forward: psycho-physiological responses to walking in natural and urban environments. J Environ Psychol. 2016;45:22-29. [FREE Full text] [CrossRef]
- Malkovsky E, Merrifield C, Goldberg Y, Danckert J. Exploring the relationship between boredom and sustained attention. Exp Brain Res. 2012;221(1):59-67. [CrossRef] [Medline]
- Tian N, Lopes P, Boulic R. A review of cybersickness in head-mounted displays: raising attention to individual susceptibility. Virtual Reality. Mar 10, 2022;26(4):1409-1441. [FREE Full text] [CrossRef]
- Annerstedt M, Jönsson P, Wallergård M, Johansson G, Karlson B, Grahn P, et al. Inducing physiological stress recovery with sounds of nature in a virtual reality forest--results from a pilot study. Physiol Behav. Jun 13, 2013;118:240-250. [FREE Full text] [CrossRef] [Medline]
- Greenebaum K, Barzel R. Audio Anecdotes II: Tools, Tips, and Techniques for Digital Audio. New York. AK Peters/CRC Press; 2004.
- Rea P, Irving DK. Producing and Directing the Short Film and Video. Milton Park, Abingdon-on-Thames, Oxfordshire, England, UK. Routledge; 2015.
- Slater M. A note on presence terminology. Presence connect. 2003;3(3):1-5. [FREE Full text]
- Witmer BG, Singer MJ. Measuring presence in virtual environments: a presence questionnaire. Presence. 1998;7(3):225-240. [CrossRef]
- Usoh M, Catena E, Arman S, Slater M. Using presence questionnaires in reality. Presence: Teleoperators and Virtual Environments. 2000;9(5):497-503. [CrossRef]
- Schubert T, Friedmann F, Regenbrecht H. The experience of presence: factor analytic insights. Presence: Teleoperators and Virtual Environments. 2001;10(3):266-281. [CrossRef]
- Schubert TW. The sense of presence in virtual environments: a three-component scale measuring spatial presence, involvement, and realness. Z für Medienpsychologie. 2003;15(2):69-71. [CrossRef]
- Tcha-Tokey K, Christmann O, Loup-Escande E, Richir S. Proposition and validation of a questionnaire to measure the user experience in immersive virtual environments. IJVR. 2016;16(1):33-48. [CrossRef]
- Brown P, Powell W. Pre-Exposure Cybersickness Assessment Within a Chronic Pain Population in Virtual Reality. Front. Virtual Real. Jun 4, 2021;2:1-10. [FREE Full text] [CrossRef]
- Kennedy RS, Lane NE, Berbaum KS, Lilienthal MG. Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. The Int J Aviat Psychol. 1993;3(3):203-220. [CrossRef]
- Gupta S, Wilcocks K, Matava C, Wiegelmann J, Kaustov L, Alam F. Creating a successful virtual reality-based medical simulation environment: tutorial. JMIR Med Educ. 2023;9:e41090. [FREE Full text] [CrossRef] [Medline]
- Seabrook E, Kelly R, Foley F, Theiler S, Thomas N, Wadley G, et al. Understanding how virtual reality can support mindfulness practice: mixed methods study. J Med Internet Res. 2020;22(3):e16106. [FREE Full text] [CrossRef] [Medline]
- Browning MHEM, Mimnaugh KJ, van Riper CJ, Laurent HK, LaValle SM. Can Simulated Nature Support Mental Health? Comparing Short, Single-Doses of 360-Degree Nature Videos in Virtual Reality With the Outdoors. Front Psychol. 2020;10:2667. [FREE Full text] [CrossRef] [Medline]
- Mouratidis K, Hassan R. Contemporary versus traditional styles in architecture and public space: A virtual reality study with 360-degree videos. Cities. Feb 2020;97:102499. [FREE Full text] [CrossRef]
- Riches S, Azevedo L, Bird L, Pisani S, Valmaggia L. Virtual reality relaxation for the general population: a systematic review. Soc Psychiatry Psychiatr Epidemiol. 2021;56(10):1707-1727. [FREE Full text] [CrossRef] [Medline]
- Hedblom M, Gunnarsson B, Iravani B, Knez I, Schaefer M, Thorsson P, et al. Reduction of physiological stress by urban green space in a multisensory virtual experiment. Sci Rep. 2019;9(1):10113. [FREE Full text] [CrossRef] [Medline]
- Li H, Zhang X, Wang H, Yang Z, Liu H, Cao Y, et al. Access to nature virtual reality: a mini-review. Front Psychol. 2021;12:725288. [FREE Full text] [CrossRef] [Medline]
Abbreviations
HMD: head-mounted display
VR: virtual reality
VR-CORE: Virtual Reality Clinical Outcomes Research Experts
Edited by T Leung; submitted 24.08.22; peer-reviewed by Y Jung, I Danovitch, R Ciorap; comments to author 20.03.23; revised version received 04.04.23; accepted 21.08.23; published 14.09.23.
Copyright©Aileen C Naef, Marie-Madlen Jeitziner, Stephan M Jakob, René M Müri, Tobias Nef. Originally published in JMIR Medical Education (https://mededu.jmir.org), 14.09.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.