Virtual Reality

Overview

The age of social media and immersive technologies has created a growing need for processing detailed visual representations of ourselves in virtual reality (VR). A realistic simulation of our presence in virtual worlds is unthinkable without a compelling and directable 3D digitization of ourselves. With the wide availability of mobile cameras and the emergence of 3D sensors, Professor Li and his team develop methods that allow computers to digitize, process, and understand dynamic objects from the physical world without professional equipment or user input. His current research focuses on unobtrusive 3D scanning and performance capture of humans in everyday settings, with applications in digital content creation and VR. The objectives of his proposed research are to develop automated digitization frameworks that can create high-fidelity virtual avatars using consumer sensors, and a deployable performance-driven facial animation system to enable, quite literally, face-to-face communication in cyberspace.

The industry standard for creating lifelike digital characters still relies on a combination of skilled artists and expensive 3D scanning hardware. While recent advances in geometry processing have significantly pushed the capabilities of modeling anatomical human parts such as faces and bodies in controlled studio environments, intricate structures such as hairstyles and wearable accessories (glasses, hats, etc.) remain difficult to model without manual intervention. Modeling humans in the wild is further challenged by occlusions from hair and clothing, partial visibility, and poor lighting conditions. We aim to develop methods that are operable by untrained users and can automatically generate photorealistic digital models of human faces and hair using accessible sensors. Nowadays, complex facial animations of virtual avatars can be driven directly by a person's facial performance. The latest techniques are consumer-friendly, real-time, markerless, calibration-free, and require only a single video camera as input. However, a truly immersive experience requires the user to wear a VR head-mounted display (HMD), which generally occludes a large part of the upper face. Our goal is to enable facial performance capture with VR headsets and to transfer true-to-life facial expressions from users to their digital avatars. Inspired by recent progress in deep learning techniques for 2D images, we believe that an end-to-end approach to 3D modeling, animation, and rendering is possible using deep neural network-based synthesis and inference techniques.
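To make the idea of performance-driven facial animation concrete, the minimal Python sketch below applies tracked expression coefficients to a linear blendshape rig, a standard formulation in facial animation rather than the specific systems developed by the team. The function name animate_frame, the array shapes, and the toy data are illustrative assumptions.

    # Minimal sketch of performance-driven facial animation with a linear
    # blendshape model (a standard formulation, not the team's actual system).
    # Assumes: a neutral face mesh and per-expression delta shapes as NumPy
    # arrays, and per-frame expression weights supplied by an external tracker
    # (e.g. a camera- or HMD-mounted face-tracking module).
    import numpy as np

    def animate_frame(neutral, deltas, weights):
        """Deform the avatar mesh for one frame.

        neutral : (V, 3) rest-pose vertex positions
        deltas  : (K, V, 3) blendshape offsets (jaw-open, smile, ...)
        weights : (K,) tracked expression coefficients, expected in [0, 1]
        """
        weights = np.clip(weights, 0.0, 1.0)   # keep expressions in a valid range
        # Weighted sum of offsets added to the neutral shape.
        return neutral + np.einsum("k,kvc->vc", weights, deltas)

    # Toy example: 4 vertices, 2 expression shapes.
    neutral = np.zeros((4, 3))
    deltas = np.random.randn(2, 4, 3) * 0.01   # placeholder blendshapes
    weights = np.array([0.7, 0.2])             # e.g. output of a face tracker
    deformed = animate_frame(neutral, deltas, weights)

In practice the per-frame weights come from the tracking side (video- or HMD-based), while the avatar side only needs the neutral mesh and its blendshapes, which is what makes expression transfer between a user and an arbitrary digital character possible.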

Selected Publications

AVATAR DIGITIZATION FROM A SINGLE IMAGE FOR REAL-TIME RENDERING. Liwen Hu, Shunsuke Saito, Lingyu Wei, Koki Nagano, Jaewoo Seo, Jens Fursund, Iman Sadeghi, Carrie Sun, Yen-Chun Chen, Hao Li. ACM Transactions on Graphics, Proceedings of the 10th ACM SIGGRAPH Conference and Exhibition in Asia 2017, 11/2017 – SIGGRAPH ASIA 2017

PHOTOREALISTIC FACIAL TEXTURE INFERENCE USING DEEP NEURAL NETWORKS. Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition 2017, 07/2017 – CVPR 2017 (Spotlight Presentation)

HIGH-FIDELITY FACIAL AND SPEECH ANIMATION FOR VR HMDS. Kyle Olszewski, Joseph J. Lim, Shunsuke Saito, Hao Li. ACM Transactions on Graphics, Proceedings of the 9th ACM SIGGRAPH Conference and Exhibition in Asia 2016, 12/2016 – SIGGRAPH ASIA 2016

FACIAL PERFORMANCE SENSING HEAD-MOUNTED DISPLAY. Hao Li, Laura Trutoiu, Kyle Olszewski, Lingyu Wei, Tristan Trutna, Pei-Lun Hsieh, Aaron Nicholls, Chongyang Ma. ACM Transactions on Graphics, Proceedings of the 42nd ACM SIGGRAPH Conference and Exhibition 2015, 08/2015 – SIGGRAPH 2015

SINGLE-VIEW HAIR MODELING USING A HAIRSTYLE DATABASE. Liwen Hu, Chongyang Ma, Linjie Luo, Hao Li. ACM Transactions on Graphics, Proceedings of the 42nd ACM SIGGRAPH Conference and Exhibition 2015, 08/2015 – SIGGRAPH 2015

Hao Li

Assistant Professor, Department of Computer Science
USC

Research interests include human digitization, facial animation, hair modeling, 3D scanning, performance capture, non-rigid registration, data-driven techniques, deep learning, geometry processing, VR and AR.

IMSC is a research center focused on data-driven solutions for real-world applications through multidisciplinary research in data science.