Head-mounted Photometric Stereo for Performance Capture (bibtex)
by Andrew Jones, Graham Fyffe, Xueming Yu, Wan-Chun Ma, Jay Busch, Ryosuke Ichikari, Mark Bolas, Paul Debevec
Abstract:
Head-mounted cameras are an increasingly important tool for capturing facial performances to drive virtual characters. They provide a fixed, unoccluded view of the face, useful for observing motion capture dots or as input to video analysis. However, the 2D imagery captured with these systems is typically affected by ambient light and generally fails to record subtle 3D shape changes as the face performs. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and generates per-pixel surface normals so that the performance is recorded dynamically in 3D. The resulting data can be used for facial relighting or as better input to machine learning algorithms for driving an animated face.
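The per-pixel surface normals mentioned in the abstract are the hallmark of photometric stereo. The paper itself does not list its solver here, but the classic Lambertian formulation is the standard starting point: with k ≥ 3 images taken under known light directions, each pixel's scaled normal is the least-squares solution of a small linear system. The sketch below (names and setup are illustrative assumptions, not the authors' implementation) shows that computation with NumPy.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo (illustrative sketch, not
    the paper's implementation).

    images:     (k, h, w) array of intensities under k known LED lights.
    light_dirs: (k, 3) array of unit light directions.
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    # Lambertian model: I = L @ g, where g = albedo * normal per pixel.
    # Solve all pixels at once by least squares.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                   # (h*w,)
    # Normalize; guard against zero-albedo (shadowed/black) pixels.
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With more than three lights the same least-squares solve gives robustness to noise, which is one reason multi-LED rigs like the one described here are attractive.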
Reference:
Head-mounted Photometric Stereo for Performance Capture (Andrew Jones, Graham Fyffe, Xueming Yu, Wan-Chun Ma, Jay Busch, Ryosuke Ichikari, Mark Bolas, Paul Debevec), In 8th European Conference on Visual Media Production (CVMP 2011), 2011.
Bibtex Entry:
@inproceedings{jones_head-mounted_2011,
	address = {London, UK},
	title = {Head-mounted {Photometric} {Stereo} for {Performance} {Capture}},
	url = {http://ict.usc.edu/pubs/Head-mounted%20Photometric%20Stereo%20for%20Performance%20Capture.pdf},
	abstract = {Head-mounted cameras are an increasingly important tool for capturing facial performances to drive virtual characters. They provide a fixed, unoccluded view of the face, useful for observing motion capture dots or as input to video analysis. However, the 2D imagery captured with these systems is typically affected by ambient light and generally fails to record subtle 3D shape changes as the face performs. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and generates per-pixel surface normals so that the performance is recorded dynamically in 3D. The resulting data can be used for facial relighting or as better input to machine learning algorithms for driving an animated face.},
	booktitle = {8th {European} {Conference} on {Visual} {Media} {Production} ({CVMP} 2011)},
	author = {Jones, Andrew and Fyffe, Graham and Yu, Xueming and Ma, Wan-Chun and Busch, Jay and Ichikari, Ryosuke and Bolas, Mark and Debevec, Paul},
	month = nov,
	year = {2011},
	keywords = {Graphics, MxR}
}