
GENEA Challenge 2023: Evaluating Gesture Generation Models in Monadic and Dyadic Settings

This research paper was accepted for publication at the 25th ACM International Conference on Multimodal Interaction (ICMI 2023).

Authors: Taras Kucherenko (SEED), Rajmund Nagy (KTH), Youngwoo Yoon (ETRI), Jieyeon Woo (ISIR), Teodor Nikolov (Umeå), Mihail Tsakov (Umeå), and Gustav Eje Henter (KTH).

GENEA Challenge 2023: A Large-Scale Evaluation of Gesture Generation Models in Monadic and Dyadic Settings

Download the full research paper (2.7 MB PDF).

This paper reports on the third GENEA Challenge, which benchmarks data-driven automatic co-speech gesture generation.

In the GENEA Challenge, participating teams built speech-driven gesture-generation systems that were then compared in a joint evaluation. This year’s challenge provided data on both sides of a dyadic interaction, allowing teams to generate full-body motion for an agent given its own speech (text and audio) and the speech and motion of the interlocutor.
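To make the dyadic task interface concrete, here is a minimal sketch of what one training example and a gesture-generation stub might look like. All names, shapes, and types below are illustrative assumptions, not the challenge’s actual data format.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class DyadicExample:
    """Hypothetical container for one dyadic training example
    (field names and shapes are assumptions, not the official format)."""
    agent_audio: np.ndarray           # main agent's speech features, shape (T_audio, d_audio)
    agent_text: List[str]             # main agent's transcript tokens
    interlocutor_audio: np.ndarray    # interlocutor's speech features
    interlocutor_text: List[str]      # interlocutor's transcript tokens
    interlocutor_motion: np.ndarray   # interlocutor's full-body motion, shape (T_frames, d_pose)
    agent_motion: np.ndarray          # target: main agent's full-body motion, shape (T_frames, d_pose)


def generate_gestures(example: DyadicExample, d_pose: int = 64) -> np.ndarray:
    """Trivial placeholder for a gesture-generation system: a real model would
    map the agent's speech and the interlocutor's behavior to motion; this stub
    just returns a static zero pose for as many frames as the interlocutor has."""
    n_frames = example.interlocutor_motion.shape[0]
    return np.zeros((n_frames, d_pose))
```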

We evaluated 12 submissions and two baselines together with held-out motion-capture data in several large-scale user studies.

The studies focused on three aspects:

  1. The human-likeness of the motion.
  2. The appropriateness of the motion for the agent’s own speech whilst controlling for the human-likeness of the motion.
  3. The appropriateness of the motion for the behavior of the interlocutor in the interaction, assessed in a setup that controls for both the human-likeness of the motion and the agent’s own speech.

We found a large span in human-likeness between challenge submissions, with a few systems rated close to human mocap. Appropriateness seems far from being solved, with most submissions performing in a narrow range slightly above chance, far behind natural motion. The effect of the interlocutor is even more subtle, with submitted systems at best performing barely above chance. Interestingly, a dyadic system being highly appropriate for agent speech does not necessarily imply high appropriateness for the interlocutor.
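For intuition about what “above chance” means here: the appropriateness studies compare motion paired with matched versus mismatched stimuli and summarize participants’ preferences relative to chance level. The sketch below only illustrates that general idea, with made-up response labels and scoring; the paper defines its own statistics.

```python
from collections import Counter
from typing import Iterable


def appropriateness_summary(responses: Iterable[str]) -> dict:
    """Summarize paired matched-vs-mismatched judgements.

    Each response is one of 'matched', 'mismatched', or 'tie', indicating
    which stimulus in a pair the participant felt fit better. Returns a
    chance-relative score (0 = chance) and the percentage of responses
    favoring the matched stimulus with ties split equally (50% = chance).
    Illustrative only; the paper's exact statistics may differ.
    """
    counts = Counter(responses)
    n = sum(counts.values())
    wins, losses, ties = counts["matched"], counts["mismatched"], counts["tie"]
    score = (wins - losses) / n                     # in [-1, 1], 0 = chance
    pref_matched = (wins + 0.5 * ties) / n * 100.0  # in [0, 100], 50 = chance
    return {"score": score, "pref_matched_percent": pref_matched}


# Example: mostly ties with a slight edge for matched stimuli,
# i.e. appropriateness only slightly above chance.
print(appropriateness_summary(["matched"] * 30 + ["mismatched"] * 20 + ["tie"] * 50))
```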

Additional material is available via the project website at svito-zar.github.io/GENEAchallenge2023/
