Rapid Photorealistic Blendshape Modeling from RGB-D Sensors (bibtex)
by Dan Casas, Andrew Feng, Oleg Alexander, Graham Fyffe, Paul Debevec, Ryosuke Ichikari, Hao Li, Kyle Olszewski, Evan Suma, Ari Shapiro
Abstract:
Creating and animating realistic 3D human faces is an important element of virtual reality, video games, and other areas that involve interactive 3D graphics. In this paper, we propose a system to generate photorealistic 3D blendshape-based face models automatically using only a single consumer RGB-D sensor. The capture and processing pipeline requires no artistic expertise to operate: capturing a single facial expression takes 15 seconds, and transforming it into a blendshape model takes approximately 1 minute of processing time per expression. Our main contributions are a complete end-to-end pipeline for automatically capturing and generating photorealistic blendshape models, and a registration method that solves for dense correspondences between two face scans by utilizing facial landmark detection and optical flow. We demonstrate the effectiveness of the proposed method by capturing different human subjects with a variety of sensors and puppeteering their 3D faces with real-time facial performance retargeting. The rapid nature of our method allows for just-in-time construction of a digital face. To that end, we also integrated our pipeline with a virtual reality facial performance capture system that allows dynamic embodiment of the generated faces despite partial occlusion of the user’s real face by the head-mounted display.
Reference:
Rapid Photorealistic Blendshape Modeling from RGB-D Sensors (Dan Casas, Andrew Feng, Oleg Alexander, Graham Fyffe, Paul Debevec, Ryosuke Ichikari, Hao Li, Kyle Olszewski, Evan Suma, Ari Shapiro), In Proceedings of the 29th International Conference on Computer Animation and Social Agents, ACM Press, 2016.
Bibtex Entry:
@inproceedings{casas_rapid_2016,
	address = {Geneva, Switzerland},
	title = {Rapid {Photorealistic} {Blendshape} {Modeling} from {RGB}-{D} {Sensors}},
	isbn = {978-1-4503-4745-7},
	url = {http://dl.acm.org/citation.cfm?doid=2915926.2915936},
	doi = {10.1145/2915926.2915936},
	abstract = {Creating and animating realistic 3D human faces is an important element of virtual reality, video games, and other areas that involve interactive 3D graphics. In this paper, we propose a system to generate photorealistic 3D blendshape-based face models automatically using only a single consumer RGB-D sensor. The capture and processing pipeline requires no artistic expertise to operate: capturing a single facial expression takes 15 seconds, and transforming it into a blendshape model takes approximately 1 minute of processing time per expression. Our main contributions are a complete end-to-end pipeline for automatically capturing and generating photorealistic blendshape models, and a registration method that solves for dense correspondences between two face scans by utilizing facial landmark detection and optical flow. We demonstrate the effectiveness of the proposed method by capturing different human subjects with a variety of sensors and puppeteering their 3D faces with real-time facial performance retargeting. The rapid nature of our method allows for just-in-time construction of a digital face. To that end, we also integrated our pipeline with a virtual reality facial performance capture system that allows dynamic embodiment of the generated faces despite partial occlusion of the user’s real face by the head-mounted display.},
	booktitle = {Proceedings of the 29th {International} {Conference} on {Computer} {Animation} and {Social} {Agents}},
	publisher = {ACM Press},
	author = {Casas, Dan and Feng, Andrew and Alexander, Oleg and Fyffe, Graham and Debevec, Paul and Ichikari, Ryosuke and Li, Hao and Olszewski, Kyle and Suma, Evan and Shapiro, Ari},
	month = may,
	year = {2016},
	keywords = {Graphics, MxR, UARC, Virtual Humans},
	pages = {121--129}
}