Study assesses the efficacy of hands-free text selection systems for VR headsets



(a) A screenshot of the experiment interface, which was used for all conditions. An ‘instruction panel’ is placed on the left side, slightly tilted toward the user. The ‘interaction panel’ is placed in the middle, also slightly tilted toward the user; its design is based on suggestions from prior related work. (b) A photo of a participant doing the experiment with the HTC VIVE Pro Eye headset. Credit: Meng, Xu and Liang.

Virtual reality (VR) and augmented reality (AR) head-mounted displays allow users to experience digital content in more immersive and engaging ways. To keep users as immersed in the content as possible, computer scientists have been trying to develop navigation and text selection interfaces that do not require the use of the hands.

Instead of pressing buttons on a handheld controller, these interfaces would allow users to select text or issue commands simply by moving their heads or blinking their eyes. Despite the promise of these approaches, most head-mounted displays today still rely heavily on handheld controllers or hand and finger gestures.

Researchers at Xi’an Jiaotong-Liverpool University and Birmingham City University recently carried out a study investigating different hands-free text selection approaches for VR and AR headsets. Their findings, published in a paper pre-published on arXiv, highlight the advantages of some of these approaches, particularly those that enable interactions through eye blinks.

“My group has been engaged in improving text entry for VR/AR over the past six years,” Hai-Ning Liang, one of the researchers who carried out the study, told TechXplore. “Text selection is an important element in the ecosystem of text entry and editing.”

The recent study by Liang and his colleagues builds on some of their earlier research focusing on hands-free text entry techniques for VR. In their previous studies, the team found that hands-free techniques can simplify user interactions with VR systems, making entering text more intuitive.

“The main goal of our work is to explore what types of features are suitable for hands-free text selection in VR,” Liang explained. “In this new study, we investigated the potential of hands-free text selection approaches in a controlled lab experiment with 24 participants using a within-subjects experiment design (i.e., where the participants experienced all test conditions).”

In their experiments, Liang and his colleagues asked participants to test different text selection techniques while performing a specific task. This task mimicked what users might encounter in real-world settings while using VR and was divided into three conditions that varied based on the length of the text presented to users (i.e., short: a single word; medium: 2–3 lines of text; long: 6–8 lines of text).

The three hands-free text selection techniques explored in this study, grouped by three selection mechanisms: (a) Dwell, (b) Eye blink, and (c) Voice. Credit: Meng, Xu and Liang.

The participants were asked to use the different hands-free text selection techniques in a VR reading environment that the team had created specifically for the experiment. After they completed these tests, the participants were asked to provide feedback about their experiences.

“Text selection, like many other interactions in VR, requires a pointing mechanism for the identification of the objects to be selected prior to interacting with them, and then another mechanism to indicate the selection,” Liang said. “In this study, we selected head-based pointing as our pointing mechanism, which means the cursor will follow the user’s head movements.”
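To illustrate what head-based pointing involves in practice, the minimal sketch below (in Python; the function names and panel geometry are illustrative assumptions, not details from the paper) shows how a cursor position can be obtained by intersecting the head’s forward ray with a flat interaction panel:

```python
# Minimal sketch of head-based pointing, assuming a flat interaction panel.
# Not taken from the paper: names and geometry are illustrative.
import numpy as np

def cursor_on_panel(head_pos, head_forward, panel_point, panel_normal):
    """Intersect the head's forward ray with the panel plane; return the hit point or None."""
    head_forward = head_forward / np.linalg.norm(head_forward)
    denom = np.dot(head_forward, panel_normal)
    if abs(denom) < 1e-6:            # looking parallel to the panel: no cursor
        return None
    t = np.dot(panel_point - head_pos, panel_normal) / denom
    return head_pos + t * head_forward if t > 0 else None

# Example: a user at eye height 1.7 m looks straight ahead at a panel 2 m away.
print(cursor_on_panel(np.array([0.0, 1.7, 0.0]), np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 1.7, 2.0]), np.array([0.0, 0.0, -1.0])))
# -> [0.  1.7 2. ]
```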

Liang and his colleagues decided to specifically assess the potential of three different text selection techniques, referred to as “Dwell,” “Eye blinks” and “Voice.” Dwell requires users to hover the pointer over the area where the text they wish to select is located for a specific amount of time (e.g., 1 second), as in the sketch below.
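The following minimal sketch (Python; the class and its structure are assumptions rather than the authors’ implementation) shows how such a dwell trigger could work: a selection fires once the cursor has rested on the same target for the threshold time.

```python
# Minimal sketch of a dwell-based selector (not the authors' implementation):
# a selection fires once the cursor has rested on the same target for the
# dwell threshold, 1 second here to match the example in the article.
import time

class DwellSelector:
    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time      # seconds the cursor must stay on a target
        self.current_target = None
        self.enter_time = None

    def update(self, target, now=None):
        """Call every frame with the target under the cursor (or None); returns the target once selected."""
        now = time.monotonic() if now is None else now
        if target != self.current_target:          # cursor moved to a new target: restart the timer
            self.current_target, self.enter_time = target, now
            return None
        if target is not None and now - self.enter_time >= self.dwell_time:
            self.enter_time = now                  # require another full dwell before re-selecting
            return target
        return None
```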

When using Eye blinks for selection, users were asked to deliberately blink their eyes to select a specific piece of text. The system recognizes these intentional eye blinks because they are typically longer than natural ones (roughly 400 ms instead of 100–200 ms).
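The sketch below illustrates one simple way such blink-duration filtering could be implemented (Python; the 300 ms cut-off and all names are illustrative assumptions, chosen to fall between the natural 100–200 ms blinks and the roughly 400 ms deliberate ones mentioned above):

```python
# Minimal sketch of blink-duration filtering (assumed logic, not the paper's code):
# a blink counts as intentional only if the eyes stay closed longer than a cut-off
# placed between natural blinks (100-200 ms) and deliberate ones (~400 ms).
class BlinkSelector:
    def __init__(self, min_intentional=0.3):       # 300 ms cut-off; an illustrative choice
        self.min_intentional = min_intentional
        self.closed_since = None

    def update(self, eyes_closed, now):
        """Feed the eye-closed state each frame; returns True when an intentional blink ends."""
        if eyes_closed and self.closed_since is None:
            self.closed_since = now                              # blink started
            return False
        if not eyes_closed and self.closed_since is not None:
            duration = now - self.closed_since                   # blink ended
            self.closed_since = None
            return duration >= self.min_intentional              # long enough to be deliberate
        return False
```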

Finally, the Voice approach required users to produce a sound above 60 dB. In their experiments, the researchers asked participants to make a humming sound when they wanted to select a text fragment.
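A rough illustration of this kind of loudness trigger is sketched below (Python; the calibration offset used to map digital signal levels to decibels is an assumption, not a detail from the study):

```python
# Minimal sketch of a loudness trigger (illustrative, not the study's pipeline):
# a microphone frame activates selection when its estimated level exceeds 60 dB.
# The calibration offset mapping digital RMS to a dB figure is an assumption.
import numpy as np

def frame_level_db(samples, calibration_offset_db=94.0):
    """Approximate sound level of one audio frame (float samples in [-1, 1])."""
    rms = np.sqrt(np.mean(np.square(samples))) + 1e-12
    return 20.0 * np.log10(rms) + calibration_offset_db

def is_voice_trigger(samples, threshold_db=60.0):
    return frame_level_db(samples) >= threshold_db
```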

“These selection mechanisms, including their parameters, were chosen based on findings from the literature and a series of pilot tests we did,” Liang explained. “The findings gathered in our experiment once again confirmed that hands-free approaches could be suitable for text selection in VR. In addition, we showed that eye blinks are a very efficient and useful selection mechanism for hands-free interaction.”

The recent work by Liang and his colleagues highlights the significant potential of hands-free text selection techniques for making VR systems more intuitive and convenient to use. In the future, their findings could inspire other research teams to develop and evaluate blink-based techniques for text selection and other types of interactions.

“Our plan for future research in this area will be to focus on making text selection even more efficient and usable and integrating it into the ecosystem for text editing and document creation in VR/AR,” Liang added. “We will also be designing text selection methods that can be used by a variety of impaired users and exploring other approaches, including eye gaze for cursor movement instead of head movements.”




More information:
Xuanru Meng, Wenge Xu, Hai-Ning Liang, An exploration of hands-free text selection for virtual reality head-mounted displays. arXiv:2209.06825v1 [cs.HC], arxiv.org/abs/2209.06825

Xueshi Lu et al, Exploration of Hands-free Text Entry Techniques For Virtual Reality, 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2020). DOI: 10.1109/ISMAR50242.2020.00061

Xueshi Lu et al, iText: Hands-free Text Entry on an Imaginary Keyboard for Augmented Reality Systems, The 34th Annual ACM Symposium on User Interface Software and Technology (2021). DOI: 10.1145/3472749.3474788

© 2022 Science X Network

Citation:
Study assesses the efficacy of hands-free text selection systems for VR headsets (2022, October 12)
retrieved 12 October 2022
from https://techxplore.com/news/2022-10-efficacy-hands-free-text-vr-headsets.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.


