The need for accurate hand motion tracking has been growing in HCI and in applications that require an understanding of user motion.
Our ability to generate models from images can be very useful to other projects at IMSC.
The Data-Driven Facial Modeling and Animation project explores the use of facial models made directly from motion capture data to address the goals of realism and automation.
This research contributes to the general IMSC vision by allowing the next wave of multimedia data, 3D geometry, to be transmitted efficiently on the web. Interactions with the rest of IMSC are mainly centered on our graphics expertise, for both hardware and software.
The Expression Synthesis Project (ESP) aims to create a driving interface that will enable nonexperts to create expressive musical performances. Anecdotal evidence amongst musicians suggests that generating an expressive performance is very much like driving a car.
The Facial Expression project seeks to automatically record and analyze human facial expressions and synthesize corresponding facial animation. Analysis and synthesis of facial expression are central to the goal of responsive and empathetic human-computer interfaces.
Hair is an indispensable ingredient in realizing virtual human characters. At IMSC, we have developed a set of techniques for human hair modeling, rendering, and animation.
Prior knowledge of the canonical structure of the human face can aid in various automated face-processing tasks. In this project we have developed a statistical appearance model for faces and are exploring its application to several problems: stylized face rendering, caricature, and reconstruction of occluded face images.