With the rapid development of virtual reality and augmented reality technologies, demand for personalized virtual avatars is growing fast. Researchers have recently introduced URAvatar (Universal Relightable Gaussian Codec Avatars), a technique that turns a simple mobile phone scan into a high-fidelity virtual avatar.

The approach not only improves the visual quality of virtual avatars but also lets users drive them in real time and relight them under different illumination conditions.

URAvatar is built on learned light transport. Unlike previous methods that estimate reflectance parameters through inverse rendering, it uses a learnable radiance transfer model that supports efficient real-time rendering. The central challenge is transferring lighting behavior across different identities; the research team addresses this by training a universal relightable avatar model on hundreds of high-quality multi-view facial scans captured under controllable point lights.
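
To make the rendering step concrete, here is a minimal sketch (illustrative only, not the authors' code) of why learned radiance transfer keeps relighting cheap: if each Gaussian carries a learned transfer vector, relighting under a new environment reduces to a dot product with the light's spherical-harmonic coefficients, so no per-frame inverse rendering is needed.

```python
import numpy as np

# Hypothetical illustration of evaluating learned radiance transfer.
# Each Gaussian stores a learned transfer vector; relighting under a new
# environment map is a dot product with the light's spherical-harmonic
# (SH) coefficients, which is why it runs in real time.

N_GAUSSIANS = 100_000
SH_COEFFS = 9          # first three SH bands (l <= 2), a common low-frequency choice

rng = np.random.default_rng(0)

# Learned per-Gaussian transfer coefficients (one vector per RGB channel).
# In the real model these would come from a neural decoder; here they are random.
transfer = rng.normal(size=(N_GAUSSIANS, 3, SH_COEFFS))

# Environment lighting projected onto the same SH basis (per RGB channel).
light_sh = rng.normal(size=(3, SH_COEFFS))

# Relit color of every Gaussian in a single einsum.
colors = np.einsum('ncs,cs->nc', transfer, light_sh)
colors = np.clip(colors, 0.0, None)   # radiance is non-negative

print(colors.shape)  # (100000, 3)
```

Because the expensive global light transport is baked into the learned coefficients, swapping the environment light only changes the small `light_sh` input.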

In practice, a user performs a short phone scan under natural lighting, and the system reconstructs the head's pose, geometry, and reflectance textures to generate a personalized relightable avatar. After fine-tuning, the avatar can be driven naturally and dynamically, behaving consistently across different environmental lighting conditions.
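
A plausible shape for that personalization step is sketched below; the names (`AvatarDecoder`, `identity_code`, the photometric loss) are hypothetical stand-ins, not the paper's actual interface. The idea is to start from the universal prior and optimize a per-user latent code together with the decoder weights against the captured frames.

```python
import torch
import torch.nn as nn

class AvatarDecoder(nn.Module):
    """Toy stand-in for a pretrained universal avatar decoder."""
    def __init__(self, id_dim=64, out_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(id_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, identity_code):
        return self.net(identity_code)

decoder = AvatarDecoder()                               # pretend: universal prior
identity_code = torch.zeros(1, 64, requires_grad=True)  # per-user latent to fit

# Phone-scan frames (random tensors stand in for real captured images).
frames = torch.rand(16, 3 * 32 * 32)

# Fine-tune the latent code and decoder weights on the user's capture.
opt = torch.optim.Adam([identity_code, *decoder.parameters()], lr=1e-3)
for step in range(100):
    rendered = decoder(identity_code).expand_as(frames)
    loss = torch.mean((rendered - frames) ** 2)   # photometric reconstruction
    opt.zero_grad()
    loss.backward()
    opt.step()
```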

Additionally, URAvatar supports independent control of the avatar's gaze direction and neck movement, making virtual avatars more expressive and interactive. The technology opens new possibilities for games and social platforms, letting users appear in virtual worlds in a more vivid and personalized way.
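
In modeling terms, independent control like this usually means the decoder takes explicit gaze and neck-pose signals as separate inputs. The sketch below is a hypothetical illustration of that conditioning, not URAvatar's actual architecture.

```python
import torch
import torch.nn as nn

class ControllableDecoder(nn.Module):
    """Toy decoder conditioned on explicit gaze and neck-pose signals."""
    def __init__(self, id_dim=64, out_dim=3 * 32 * 32):
        super().__init__()
        # identity + gaze direction (3D vector) + neck rotation (3D axis-angle)
        self.net = nn.Sequential(nn.Linear(id_dim + 3 + 3, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, identity_code, gaze_dir, neck_pose):
        return self.net(torch.cat([identity_code, gaze_dir, neck_pose], dim=-1))

decoder = ControllableDecoder()
identity = torch.zeros(1, 64)
gaze = torch.tensor([[0.0, 0.0, 1.0]])   # look straight ahead
neck = torch.tensor([[0.0, 0.3, 0.0]])   # slight turn around the vertical axis
image = decoder(identity, gaze, neck)    # changing only gaze/neck moves the avatar
```

Feeding the controls in as separate inputs, rather than entangling them with expression or lighting, is what makes them independently drivable.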

Key Points:

🌟 URAvatar generates a personalized, high-fidelity virtual avatar from a mobile phone scan.

💡 The technology uses a learnable radiance transfer model for real-time rendering and cross-identity lighting transfer.

🎮 Users can independently control the avatar's gaze direction and neck movements, improving interactivity in virtual experiences.