A framework for locally retargeting and rendering facial performance
by Ko-Yun Liu, Wan-Chun Ma, Chun-Fa Chang, Chuan-Chang Wang, Paul Debevec
Abstract:
We present a facial motion retargeting method that enables the control of a blendshape rig according to marker-based motion capture data. The main purpose of the proposed technique is to allow a blendshape rig to create facial expressions that conform best to the current motion capture input, regardless of the underlying blendshape poses. In other words, even if all of the blendshape poses comprise only symmetrical facial expressions, our method is still able to create asymmetrical expressions without physically splitting any of them into more local blendshape poses. An automatic segmentation technique based on the analysis of facial motion is introduced to create facial regions for local retargeting. We also show that it is possible to blend normal maps for rendering in the same framework. Rendering with the blended normal map significantly improves surface appearance and details.
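The paper does not list its solver here, but retargeting of this kind is typically posed as a least-squares fit of blendshape weights to the captured marker positions. Below is a minimal sketch of that idea, assuming a simple delta-blendshape model; the function name and the plain `lstsq`-plus-clipping solve are illustrative, not the authors' implementation (a production retargeter would use a properly constrained non-negative solver).

```python
import numpy as np

def solve_blendshape_weights(neutral, blendshapes, markers):
    """Fit blendshape weights so the rig best matches captured markers.

    neutral:     (n, 3) rest-pose marker positions on the rig
    blendshapes: list of (n, 3) marker positions, one per blendshape pose
    markers:     (n, 3) captured marker positions for the current frame

    Returns one weight per blendshape, solved in the least-squares sense.
    """
    # Stack each blendshape's displacement from the neutral pose as a
    # column of B, so that neutral + B @ w approximates the markers.
    B = np.stack([(b - neutral).ravel() for b in blendshapes], axis=1)
    target = (markers - neutral).ravel()

    # Unconstrained least-squares solve; clipping to [0, 1] afterwards is
    # a crude stand-in for the bound constraints a real rig would enforce.
    w, *_ = np.linalg.lstsq(B, target, rcond=None)
    return np.clip(w, 0.0, 1.0)
```

For the local retargeting described in the abstract, a solve like this would be repeated per automatically segmented facial region, using only the markers inside that region, which is what lets symmetric blendshape poses combine into asymmetric expressions.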
Reference:
A framework for locally retargeting and rendering facial performance (Ko-Yun Liu, Wan-Chun Ma, Chun-Fa Chang, Chuan-Chang Wang, Paul Debevec), In Computer Animation and Virtual Worlds, volume 22, pages 159–167, 2011.
Bibtex Entry:
@article{liu_framework_2011,
	title = {A framework for locally retargeting and rendering facial performance},
	volume = {22},
	url = {http://ict.usc.edu/pubs/A%20Framework%20for%20Locally%20Retargeting%20and%20Rendering%20Facial%20Performance.pdf},
	abstract = {We present a facial motion retargeting method that enables the control of a blendshape rig according to marker-based motion capture data. The main purpose of the proposed technique is to allow a blendshape rig to create facial expressions that conform best to the current motion capture input, regardless of the underlying blendshape poses. In other words, even if all of the blendshape poses comprise only symmetrical facial expressions, our method is still able to create asymmetrical expressions without physically splitting any of them into more local blendshape poses. An automatic segmentation technique based on the analysis of facial motion is introduced to create facial regions for local retargeting. We also show that it is possible to blend normal maps for rendering in the same framework. Rendering with the blended normal map significantly improves surface appearance and details.},
	journal = {Computer {Animation} and {Virtual} {Worlds}},
	author = {Liu, Ko-Yun and Ma, Wan-Chun and Chang, Chun-Fa and Wang, Chuan-Chang and Debevec, Paul},
	month = apr,
	year = {2011},
	keywords = {Graphics},
	pages = {159--167}
}