Description:
- Realistic avatar creation for movies, video games, teleconferencing, and social media
Abstract
USC researchers have developed Deep Iterative Face Fitting (DIFF), an end-to-end neural network that creates high-quality face avatars from a single image. By extracting features in both the image space and the UV space, the technique reconstructs a professional-grade face model with pore-level geometric detail at 4K UHD resolution. DIFF handles even extreme poses, expressions, and illumination, and it significantly outperforms previous neural network-based approaches.
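The core idea named in the title is iterative fitting: a model's parameters are refined step by step until the reconstruction matches the observed image. The toy sketch below illustrates only that generic loop with NumPy; the linear "shape basis", the gradient-step update, and all dimensions are illustrative assumptions and do not describe the published DIFF architecture, in which a learned network would predict the update from image- and UV-space features.

```python
# Toy sketch of iterative model fitting (NOT the published DIFF network).
# A parameter vector is repeatedly refined until the reconstruction matches
# the observation. Here a simple gradient step on an assumed linear "face
# model" stands in for the learned update a network would predict.
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(90, 10))      # stand-in linear shape basis
true_params = rng.normal(size=10)      # parameters we hope to recover
observation = basis @ true_params      # stand-in "observed" geometry

params = np.zeros(10)                  # start from the mean shape
step = 0.01
for _ in range(200):                   # iterative refinement loop
    residual = basis @ params - observation
    params -= step * basis.T @ residual  # predicted update (gradient step)

error = np.linalg.norm(basis @ params - observation)
print(f"final reconstruction error: {error:.6f}")
```

The loop converges because each update shrinks the reconstruction residual; in a learned system, the network replaces the hand-written update rule and can correct for occlusion, pose, and lighting that a plain least-squares fit cannot.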
Benefits
- Generates high-resolution, professional-quality face assets from single-view images
- Faster than similar neural-based techniques
- Performs well even with extreme poses and lighting
Market Application
High-quality, photorealistic face avatars are in demand for movies, video games, teleconferencing, and social media. However, producing these avatars is costly and time-consuming, typically requiring manual adjustment and specialized capture hardware. There is therefore a need for a faster, fully automatic system that produces professional-grade face avatars efficiently and accessibly. Neural-learning-based techniques present an opportunity to meet this demand with animation-ready, production-grade face-capture solutions that are efficient, end-to-end, and adaptable to different capture rigs.
Stage of Development
- Tested against similar neural network-based approaches
- Available for licensing