Rapid Photorealistic Blendshapes from Commodity RGB-D Sensors (bibtex)
by Dan Casas, Oleg Alexander, Andrew W. Feng, Graham Fyffe, Ryosuke Ichikari, Paul Debevec, Ruizhe Wang, Evan Suma, Ari Shapiro
Abstract:
Creating and animating a realistic 3D human face has been an important task in computer graphics. The capability to capture the 3D face of a human subject and reanimate it quickly will find many applications in games, training simulations, and interactive 3D graphics. In this paper, we propose a system to capture photorealistic 3D faces and generate the blendshape models automatically using only a single commodity RGB-D sensor. Our method can rapidly generate a set of expressive facial poses from a single Microsoft Kinect and requires no artistic expertise on the part of the capture subject. The system takes only a matter of seconds to capture and produce a 3D facial pose and requires only 4 minutes of processing time to transform it into a blendshape model. Our main contributions include an end-to-end pipeline for capturing and generating face blendshape models automatically, and a registration method that solves dense correspondences between two face scans by utilizing facial landmark detection and optical flow. We demonstrate the effectiveness of the proposed method by capturing 3D facial models of different human subjects and puppeteering their models in an animation system with real-time facial performance retargeting.
Reference:
Rapid Photorealistic Blendshapes from Commodity RGB-D Sensors (Dan Casas, Oleg Alexander, Andrew W. Feng, Graham Fyffe, Ryosuke Ichikari, Paul Debevec, Ruizhe Wang, Evan Suma, Ari Shapiro), In Proceedings of the 19th Symposium on Interactive 3D Graphics and Games, ACM Press, 2015.
Bibtex Entry:
@inproceedings{casas_rapid_2015,
	address = {San Francisco, CA},
	title = {Rapid {Photorealistic} {Blendshapes} from {Commodity} {RGB}-{D} {Sensors}},
	isbn = {978-1-4503-3392-4},
	url = {http://dl.acm.org/citation.cfm?doid=2699276.2721398},
	doi = {10.1145/2699276.2721398},
	abstract = {Creating and animating a realistic 3D human face has been an important task in computer graphics. The capability to capture the 3D face of a human subject and reanimate it quickly will find many applications in games, training simulations, and interactive 3D graphics. In this paper, we propose a system to capture photorealistic 3D faces and generate the blendshape models automatically using only a single commodity RGB-D sensor. Our method can rapidly generate a set of expressive facial poses from a single Microsoft Kinect and requires no artistic expertise on the part of the capture subject. The system takes only a matter of seconds to capture and produce a 3D facial pose and requires only 4 minutes of processing time to transform it into a blendshape model. Our main contributions include an end-to-end pipeline for capturing and generating face blendshape models automatically, and a registration method that solves dense correspondences between two face scans by utilizing facial landmark detection and optical flow. We demonstrate the effectiveness of the proposed method by capturing 3D facial models of different human subjects and puppeteering their models in an animation system with real-time facial performance retargeting.},
	booktitle = {Proceedings of the 19th {Symposium} on {Interactive} 3D {Graphics} and {Games}},
	publisher = {ACM Press},
	author = {Casas, Dan and Alexander, Oleg and Feng, Andrew W. and Fyffe, Graham and Ichikari, Ryosuke and Debevec, Paul and Wang, Ruizhe and Suma, Evan and Shapiro, Ari},
	month = feb,
	year = {2015},
	keywords = {Graphics, MxR, UARC, Virtual Humans},
	pages = {134--134}
}