14:30 – 18:00

Rajkumar Darbar's PhD thesis

Unlike desktop computing environments, augmented reality (AR) blurs the boundary between the physical and digital worlds by superimposing virtual information on the real environment. An immersive AR experience can be created either by wearing a dedicated headset or by using video projectors. This thesis explores the interaction challenges associated with these two types of AR displays.

AR headsets are constantly improving in terms of display (e.g., field of view), tracking, interaction techniques, and portability. The input techniques currently available in AR headsets (such as hand input, head gaze, and voice) are relatively easy to use for some tasks (e.g., grasping and moving an object), but they lack precision and are not suitable for prolonged use. As a result, tasks that require precision become difficult to perform. In our research, we consider one such task: text selection, which requires character-level precision.

Projection-based AR, generally referred to as spatial augmented reality (SAR), directly augments physical objects with digital content using projectors. In SAR, the digital augmentation is usually predefined, and the user often acts as a passive viewer. To bring interactivity to SAR, users need contextual graphical widgets (pop-ups, menus, labels, interactive tools, etc.). Unfortunately, integrating such widgets into a SAR scene is difficult.

In this thesis, we explored new interaction techniques to address these challenges. Our two main contributions are as follows. First, we studied the use of a smartphone as an interactive controller to select text displayed through a headset. We developed four one-handed, smartphone-based text selection techniques for AR headsets: continuous touch (the smartphone's touchscreen acts as a touchpad), discrete touch (the touchscreen is used to move the cursor character by character, word by word, and line by line), spatial movement (the smartphone is moved in front of the user), and raycasting (the smartphone is used as a laser pointer). We also compared these techniques in a user study.

Second, we extended the physical space of SAR environments by providing 2D graphical widgets in mid-air, projected onto a drone-mounted panel. Users can control the position of the drone and dynamically interact with the projected information using a joystick. We present three ways of embedding widgets in SAR using a drone: displaying annotations in mid-air, providing interactive tools, and supporting different viewpoints. We also describe the implementation details of our approach. These explorations aim at extending the interaction space of immersive AR applications.
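As a concrete illustration of the continuous-touch technique from the first contribution, here is a minimal sketch of how relative touchpad deltas could drive a character-level selection cursor. The gain value, the TextLayout helper, and the fixed-width glyph assumption are illustrative choices of ours, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): mapping smartphone touchpad
# deltas to a character-level text cursor shown in the headset.
from dataclasses import dataclass

@dataclass
class TextLayout:
    """Geometry of the text panel rendered in the headset (assumed fixed-width)."""
    char_width: float   # width of one glyph
    line_height: float  # height of one text line
    cols: int           # characters per line
    rows: int           # number of lines

    def nearest_char(self, x: float, y: float) -> int:
        """Snap a 2D cursor position to the index of the nearest character."""
        col = min(max(int(x / self.char_width), 0), self.cols - 1)
        row = min(max(int(y / self.line_height), 0), self.rows - 1)
        return row * self.cols + col

GAIN = 2.5  # touchpad-to-display transfer gain (illustrative value)

def continuous_touch(cursor, delta, layout):
    """Continuous touch: relative touchpad deltas move a virtual cursor,
    which is snapped to the closest character for selection."""
    x = cursor[0] + GAIN * delta[0]
    y = cursor[1] + GAIN * delta[1]
    return (x, y), layout.nearest_char(x, y)

# Example: a small rightward swipe moves the cursor along the first line.
layout = TextLayout(char_width=8.0, line_height=16.0, cols=40, rows=10)
cursor, char_index = continuous_touch((0.0, 0.0), (12.0, 0.0), layout)
```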
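Likewise, for the second contribution, here is a minimal sketch of how a joystick could velocity-control the target position of the drone-mounted projection panel that carries the widgets. The loop period, speed limit, and function names are assumptions for illustration, not the system described in the thesis.

```python
# Minimal sketch (hypothetical names): a joystick velocity-controls the
# target position of the drone-mounted panel; the projector then renders
# the widgets at the panel's current pose.
import numpy as np

DT = 0.02        # control-loop period in seconds (illustrative)
MAX_SPEED = 0.5  # maximum panel speed in m/s (illustrative)

def update_panel_target(target, joystick_axes):
    """Integrate joystick axes (each in [-1, 1]) into a new 3D target
    position for the drone-mounted projection panel."""
    velocity = MAX_SPEED * np.asarray(joystick_axes, dtype=float)
    return target + velocity * DT

# Example control step: push the stick forward and slightly right.
target = np.array([0.0, 1.5, 2.0])  # x, y (height), z in metres
target = update_panel_target(target, [0.2, 0.0, 1.0])
```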

Inria: Ada Lovelace room