How Your Avatar Shapes Your Virtual Reality Experience

Written by netizenship | Published 2025/04/28
Tech Story Tags: virtual-reality | vr-avatars | first-person-perspective-vr | third-person-perspective-vr | spatial-awareness-vr | vr-navigation | vr-self-embodiment | vr-design-guidelines

TLDR This section reviews related work on VR avatars, exploring the role of realism and perspective (first-person vs. third-person) in enhancing user embodiment and spatial awareness. It also covers point cloud visualization techniques, addressing challenges in rendering and real-time data processing.

Authors:

(1) Rafael Kuffner dos Anjos;

(2) Joao Madeiras Pereira.

Table of Links

Abstract and 1 Introduction

2 Related Work and 2.1 Virtual avatars

2.2 Point cloud visualization

3 Test Design and 3.1 Setup

3.2 User Representations

3.3 Methodology

3.4 Virtual Environment and 3.5 Tasks Description

3.6 Questionnaires and 3.7 Participants

4 Results and Discussion, and 4.1 User preferences

4.2 Task performance

4.3 Discussion

5 Conclusions and References

2 RELATED WORK

In this section we present and discuss related work on user representation. First, we introduce and define virtual avatars and their associated concepts. Then, we briefly review point cloud visualization techniques and discuss their use and viability in Virtual Reality setups.

2.1 Virtual avatars

An important part of the experience is how users are represented in the virtual scene. As opposed to CAVE-like systems, Head-Mounted Display technology occludes the user's real body, compromising the overall virtual reality session. A way of overcoming this problem is to use a fully embodied representation of the user within the virtual environment [31]. A virtual body can supply users with a reference of recognizable size and a sense of connectedness to the virtual environment [13, 26], even though studies indicate that virtual body setups may still cause distance underestimation [24]. The feeling of presence is related to the concept of proprioception, the ability to sense stimuli arising within the body regarding position, motion, and equilibrium. The sense of embodiment into an avatar is constitutive of the sense of presence in virtual reality (VR) and affects the way one interacts with virtual elements [16]. This concept is subdivided into three components: (i) the sense of agency, i.e. the feeling of motor control over the virtual body; (ii) the sense of body ownership, i.e. the feeling that the virtual body is one's own body; and (iii) self-location, i.e. the experienced location of the self.

The level of realism of the avatar also plays an important part in the VR experience and in how it relates to a user's sense of embodiment. A common problem in this respect is the uncanny valley [21], which states that the acceptability of an artificial character does not increase linearly with its likeness to human form. Instead, after an initial rise in acceptability, there is a pronounced decrease when the character is similar, but not identical, to human form. Additionally, Piwek et al. [22] state that characters in the deepest part of the valley become more acceptable when they are animated. Works by Lugrin et al. [19, 20] also show that the uncanny valley affects the feeling of presence and embodiment of avatars viewed in first-person perspective (1PP) through a head-mounted display.
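To make the non-linearity concrete, the sketch below plots an illustrative, hand-tuned acceptability curve that dips near full human likeness; the exact function is our assumption for illustration, not Mori's measured data.

```python
import numpy as np

def illustrative_acceptability(likeness: np.ndarray) -> np.ndarray:
    """Toy, hand-tuned curve illustrating the uncanny valley [21]:
    acceptability rises with human likeness, then dips sharply when a
    character is close to, but not quite, human. The exact shape is an
    illustrative assumption, not Mori's measured data."""
    rise = likeness                                           # initial, roughly linear rise
    valley = 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)  # dip near 85% likeness
    return rise - valley

likeness = np.linspace(0.0, 1.0, 101)  # 0 = abstract, 1 = indistinguishable from human
accept = illustrative_acceptability(likeness)
# The relationship is non-monotonic: the minimum sits near full human likeness.
print("deepest point of the valley at likeness ≈", likeness[np.argmin(accept)])
```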

Another possibility when using a self-embodied avatar is changing the perspective from which the avatar is viewed. This approach is commonly used in video games to increase users' spatial awareness when navigating and interacting in the scene. The sense of body ownership is also achievable with artificial bodies (in real scenarios) and avatars (in virtual environments) in immersive setups. A classic extra-corporeal experience is known as the Rubber Hand Illusion (RHI) [5]. In this illusion, a subject is made to believe that a rubber hand is in fact their own hand, which is hidden from view, to the point of pulling their own hand away if the rubber hand is attacked. This illusion has similar effects in Virtual Reality setups, where it is called the Virtual Hand Illusion, and can be induced by visuotactile [30] and visuomotor synchrony [29, 34].

The Rubber Hand Illusion has also been shown to work with full-body embodiment. Ehrsson et al. [9] demonstrated this by streaming stereoscopic camera footage of the participant's body, seen from a third-person perspective, to a head-mounted display. Lenggenhager et al. [18] confirmed this with a similar setup, showing that when the viewpoint is placed behind users' bodies, users feel located where they see the virtual body. In VR, the usage of orthogonal third-person viewpoints has been explored and was, for instance, recommended to help set the posture of a motion-controlled virtual body [6].

Further work by Salamin et al. uses an augmented-reality setup with a displaced camera and an HMD to show that the best perspective depends on the action being performed: 1PP for more precise object manipulation and third-person perspective (3PP) for actions involving movement. Work by the same authors also showed that users preferred 3PP over 1PP and needed less training in a ball-catching scenario [28]. Further work by Kosch et al. [17] finds that the preferred viewpoint in 3PP is behind the user's head, providing a real-life third-person experience. Distance underestimation is also present when the avatar is seen from a third-person perspective [28].
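As a concrete illustration of these findings, the sketch below places a 3PP camera behind (and slightly above) the tracked head, matching the viewpoint Kosch et al. report as preferred, and switches perspective by task in the spirit of Salamin et al. The offsets, function names, and task labels are hypothetical, not taken from either paper.

```python
import numpy as np

def third_person_camera(head_pos, head_yaw_rad, back_offset=1.5, up_offset=0.3):
    """Place a 3PP camera behind (and slightly above) the tracked head,
    looking at it. Offsets in metres are illustrative assumptions."""
    # Forward direction of the head on the horizontal plane (y is up).
    forward = np.array([np.sin(head_yaw_rad), 0.0, np.cos(head_yaw_rad)])
    cam_pos = head_pos - back_offset * forward + np.array([0.0, up_offset, 0.0])
    look_dir = head_pos - cam_pos
    return cam_pos, look_dir / np.linalg.norm(look_dir)

def choose_perspective(task: str) -> str:
    """Action-dependent choice following Salamin et al.: 1PP for precise
    manipulation, 3PP for movement. Task labels are hypothetical."""
    return "1PP" if task == "manipulation" else "3PP"

head = np.array([0.0, 1.7, 0.0])
pos, look = third_person_camera(head, head_yaw_rad=0.0)
print(choose_perspective("navigation"), pos, look)
```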

2.2 Point cloud visualization

The main challenge when dealing with point cloud visualization is the unstructured and sparse nature of the data. Rendering point clouds with point primitives has several drawbacks compared to other techniques (e.g., background/foreground confusion, loss of definition in close-ups), especially in low-resolution scenarios [1]. Katz et al. [14] addressed background/foreground confusion by estimating the direct visibility of sets of points. However, in a mixed visualization scenario such as the one applied in this work, confusion still exists between the points rendered for the body and the mesh-based environment.
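For reference, the visibility operator of Katz et al. is compact enough to sketch: spherically flip the cloud about the viewpoint, then keep the points whose flipped images land on the convex hull of the flipped set plus the viewpoint. A minimal NumPy/SciPy version, with the flip-sphere radius factor as a tunable assumption, could look like this:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """Hidden Point Removal in the spirit of Katz et al. [14]: spherically
    flip the cloud about the viewpoint and keep the points whose flipped
    images lie on the convex hull of the flipped set plus the viewpoint.
    Returns indices of (estimated) visible points. radius_factor is a
    tunable assumption controlling the size of the flip sphere."""
    p = points - viewpoint                          # put the viewpoint at the origin
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    radius = radius_factor * norms.max()            # sphere must enclose the cloud
    flipped = p + 2.0 * (radius - norms) * (p / norms)     # spherical flipping
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))   # include the viewpoint
    visible = hull.vertices[hull.vertices < len(points)]   # drop the viewpoint vertex
    return np.sort(visible)

# Example: points on a unit sphere viewed from +Z; roughly the hemisphere
# facing the camera should be classified as visible.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
vis = hidden_point_removal(pts, viewpoint=np.array([0.0, 0.0, 3.0]))
print(f"{len(vis)} of {len(pts)} points visible")
```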

Surface reconstruction is the standard approach to visualizing point cloud data [10], with several successful techniques estimating the original surfaces from point sets [12, 15]. A single depth stream can easily be remeshed using Delaunay triangulation [11], or multi-stream fusion can be performed, as shown in the work of Dou et al. [8]. While single-stream triangulation can be done in real time, multi-stream fusion can be a very time-consuming task that requires specialized hardware for processing the data.
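A minimal sketch of single-stream remeshing in the spirit of [11], assuming pinhole intrinsics and a hand-picked edge-length threshold for discarding triangles that span depth discontinuities:

```python
import numpy as np
from scipy.spatial import Delaunay

def depth_to_mesh(depth, fx, fy, cx, cy, max_edge=0.05):
    """Remesh a single depth image: triangulate the valid pixel
    coordinates with 2D Delaunay, back-project to 3D with pinhole
    intrinsics, and drop stretched triangles across depth jumps.
    Intrinsics and the max_edge threshold (metres) are assumptions."""
    v, u = np.nonzero(depth > 0)                 # valid pixels (row, col)
    z = depth[v, u]
    verts = np.column_stack(((u - cx) * z / fx,  # pinhole back-projection
                             (v - cy) * z / fy,
                             z))
    tri = Delaunay(np.column_stack((u, v)))      # triangulate in image space
    faces = tri.simplices
    # Reject faces whose longest 3D edge exceeds max_edge (discontinuities).
    e = verts[faces]
    edge_len = np.linalg.norm(e - np.roll(e, 1, axis=1), axis=2)
    return verts, faces[edge_len.max(axis=1) < max_edge]

# Toy 64x64 depth image of a flat surface one metre away.
depth = np.full((64, 64), 1.0)
verts, faces = depth_to_mesh(depth, fx=60.0, fy=60.0, cx=32.0, cy=32.0)
print(len(verts), "vertices,", len(faces), "triangles")
```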

Screen-aligned splats [33] have been proposed as a more efficient alternative to polygonal mesh rendering [27]. They are easily implemented in an interactive system, making them the go-to approach for visualizing real-time data, and their authors claim a visual appearance comparable to closed surfaces for visualization purposes [4]. Although surface-aligned splats [23, 3, 35] create a better approximation of the surface, the normal estimation they require can be a costly operation in real time. Splatting has been used in the past for point cloud visualization of environments [2], but not for real-time reconstruction of the user's body.
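A minimal sketch of the core of screen-aligned splatting, expanding one view-space point into a camera-facing quad, as a vertex or geometry shader typically would; the fixed splat radius and depth-independent sizing are simplifying assumptions:

```python
import numpy as np

def screen_aligned_splat(center_view, splat_radius=0.01):
    """Expand one point (already in camera/view space) into a
    screen-aligned quad, mimicking the per-point expansion used in
    splat rendering [33]. splat_radius (view-space metres) is an
    illustrative assumption; real systems often scale it with depth
    and local point density."""
    # In view space the camera looks down -Z, so a screen-aligned quad
    # simply offsets the point along the view-space X and Y axes.
    right = np.array([1.0, 0.0, 0.0])
    up = np.array([0.0, 1.0, 0.0])
    r = splat_radius
    return [center_view + s * r * right + t * r * up
            for s, t in ((-1, -1), (1, -1), (1, 1), (-1, 1))]

quad = screen_aligned_splat(np.array([0.2, 0.1, -1.5]))
print(np.round(quad, 3))  # four quad corners facing the camera
```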

This paper is available on arXiv under a CC BY 4.0 DEED license.

