LIZ CARTER, PH.D.
Human-Computer Interaction and Human-Robot Interaction Research
Uncanny Valley and Animation Research
I examined how viewers perceive real and animated facial and vocal expressions.
Modeling and Animating Eye Blinks
Animated blinks generated from a model of human blink data, with fully closing eyelids, are consistently perceived as more natural than blinks created using the various dynamics proposed in animation textbooks.
Trutoiu, L. C., Carter, E. J., Matthews, I., & Hodgins, J. K. (2011). Modeling and animating eye blinks. ACM Transactions on Applied Perception (TAP), 8(3), 17.
Voice Synchronization in Film
When temporal synchronization between voice and face is noticeably absent, both the perceived quality and the perceived emotional intensity of a performance decrease.
Carter, E. J., Sharan, L., Trutoiu, L., Matthews, I., & Hodgins, J. K. (2010). Perceptually motivated guidelines for voice synchronization in film. ACM Transactions on Applied Perception (TAP), 7(4), 23.
Uncanny Valley Literature Review
Carter, E. J., & Pollick, F. E. (2014). Not quite human: What virtual characters have taught us about person perception. In Grimshaw, M. (Ed.), The Oxford Handbook of Virtuality. Oxford: Oxford University Press.
Viewing Patterns for Animated Faces
Viewers look slightly longer at faces of animated characters that they find unpleasant.
Carter, E. J., Mahler, M., & Hodgins, J. K. (2013). Unpleasantness of animated characters corresponds to increased viewer attention to faces. Proceedings of the ACM Symposium on Applied Perception (pp. 35-40).
Posed and Spontaneous Smile Dynamics
Both spatial and temporal linearities affect the perceived genuineness of posed and spontaneous smiles.
Trutoiu, L. C., Carter, E. J., Pollard, N., Cohn, J. F., & Hodgins, J. K. (2014). Spatial and temporal linearities in posed and spontaneous smiles. ACM Transactions on Applied Perception (TAP), 11(3), 12.
A Perceptual Control Space for Garment Simulation
We created a perceptual control space for cloth simulation that provides intuitive, art-directable control over simulation behavior, based on a learned mapping from common descriptors for cloth to the parameters of the simulation.
Sigal, L., Mahler, M., Diaz, S., McIntosh, K., Carter, E., Richards, T., & Hodgins, J. (2015). A perceptual control space for garment simulation. ACM Transactions on Graphics (TOG), 34(4), 117.