IVA 2022: Evaluating Data-Driven Co-Speech Gestures of Embodied Conversational Agents
This is a research presentation from the International Conference on Intelligent Virtual Agents 2022.
Authors: Yuan He (KTH), André Pereira (KTH), Taras Kucherenko (SEED)
Download the full research paper. (2 MB PDF)
Do co-speech gestures really affect how people perceive animated characters? And if so, how do you measure their effectiveness?
Embodied Conversational Agents (ECAs) that use co-speech gestures can enhance human-machine interaction in many ways. In recent years, data-driven gesture generation approaches for ECAs have attracted considerable research attention, and the resulting methods have steadily improved. Real-time interaction is typically used when researchers evaluate ECA systems that generate rule-based gestures. However, when evaluating ECAs based on data-driven methods, participants are often only asked to watch pre-recorded videos, which provides limited insight into what a person would actually perceive during an interaction.
To address this limitation, this paper explored the use of real-time interaction to assess data-driven gesturing ECAs. We provided a testbed framework and investigated whether gestures affect human perception of ECAs along four dimensions: human-likeness, animacy, perceived intelligence, and focused attention.
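As an illustration of how such dimensions are typically scored, the short sketch below aggregates Likert-scale questionnaire items into per-dimension scores. The item counts and ratings are hypothetical placeholders and are not taken from the paper.

```python
# Illustrative sketch: scoring the four perception dimensions from Likert items.
# The item counts and ratings below are hypothetical, not the study's actual data.
from statistics import mean

# One participant's 1-5 ratings, grouped by perception dimension.
responses = {
    "human-likeness": [4, 3, 4],
    "animacy": [4, 4, 3],
    "perceived intelligence": [3, 4, 4],
    "focused attention": [5, 4, 4],
}

# Each dimension's score is the mean of its item ratings.
scores = {dimension: mean(items) for dimension, items in responses.items()}
print(scores)
```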
Our user study required participants to interact with two ECAs – one with and one without hand gestures. We collected subjective data from the participants’ self-report questionnaires and objective data from a gaze tracker. To our knowledge, this study represents the first attempt to evaluate data-driven gesturing ECAs through real-time interaction and the first experiment using gaze tracking to examine the effect of ECAs’ gestures.
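For a concrete picture of how the two conditions might be compared, here is a minimal sketch of a within-subjects comparison of one perception dimension between the gesture and no-gesture conditions. All ratings are invented placeholders, and the Wilcoxon signed-rank test is an assumed choice for ordinal Likert-style data, not necessarily the analysis used in the paper.

```python
# Minimal sketch: within-subjects comparison of one perception dimension between
# the gesture and no-gesture conditions. Ratings are invented placeholders, and
# the Wilcoxon signed-rank test is an assumed choice, not the paper's analysis.
import numpy as np
from scipy.stats import wilcoxon

# One mean questionnaire score (1-5 scale) per participant and condition.
with_gestures = np.array([4.2, 3.8, 4.5, 3.9, 4.1, 4.4, 3.7, 4.0])
without_gestures = np.array([3.5, 3.6, 4.0, 3.2, 3.8, 3.9, 3.4, 3.6])

# Paired, non-parametric test; each participant saw both agents.
statistic, p_value = wilcoxon(with_gestures, without_gestures)
print(f"W = {statistic:.1f}, p = {p_value:.3f}")
```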