Text-to-Motion Retrieval: Towards Joint Understanding of Human Motion Data and Natural Language

MESSINA, Nicola, Jan SEDMIDUBSKÝ, Fabrizio FALCHI and Tomáš REBOK

Basic information

Original name

Text-to-Motion Retrieval: Towards Joint Understanding of Human Motion Data and Natural Language

Authors

MESSINA, Nicola (380 Italy), Jan SEDMIDUBSKÝ (203 Czech Republic, guarantor, belonging to the institution), Fabrizio FALCHI (380 Italy) and Tomáš REBOK (203 Czech Republic, belonging to the institution)

Edition

New York, NY, USA, 46th International Conference on Research and Development in Information Retrieval (SIGIR), pp. 2420-2425, 6 pp., 2023

Publisher

Association for Computing Machinery

Other information

Language

English

Type of result

Proceedings paper

Field of study

10200 1.2 Computer and information sciences

Country of publisher

United States of America

Confidentiality degree

is not subject to a state or trade secret

Publication form

electronic version "online"

Links

RIV identification code

RIV/00216224:14330/23:00130552

Organization unit

Faculty of Informatics

ISBN

978-1-4503-9408-6

UT WoS

001118084002091

Keywords in English

human motion data;skeleton sequences;CLIP;BERT;deep language models;ViViT;motion retrieval;cross-modal retrieval

Tags

International impact, Reviewed
Changed: 14 March 2024, 13:10, doc. RNDr. Jan Sedmidubský, Ph.D.

Abstract

In the original language

Due to recent advances in pose-estimation methods, human motion can be extracted from a common video in the form of 3D skeleton sequences. Despite promising application opportunities, effective and efficient content-based access to large volumes of such spatio-temporal skeleton data remains a challenging problem. In this paper, we propose a novel content-based text-to-motion retrieval task, which aims at retrieving relevant motions based on a specified natural-language textual description. To define baselines for this uncharted task, we employ the BERT and CLIP language representations to encode the text modality and successful spatio-temporal models to encode the motion modality. We additionally introduce our transformer-based approach, called Motion Transformer (MoT), which employs divided space-time attention to effectively aggregate the different skeleton joints in space and time. Inspired by recent progress in text-to-image/video matching, we experiment with two widely adopted metric-learning loss functions. Finally, we set up a common evaluation protocol by defining qualitative metrics for assessing the quality of the retrieved motions, targeting the two recently introduced KIT Motion-Language and HumanML3D datasets. The code for reproducing our results is available at: https://github.com/mesnico/text-to-motion-retrieval.
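
To make the divided space-time attention mentioned in the abstract concrete, below is a minimal PyTorch sketch of one such transformer block: each skeleton joint first attends across frames (time), and then the joints within each frame attend to one another (space). This is an illustrative reconstruction based on the abstract, not the authors' implementation; all module names, layer sizes, and the joint count are assumptions, and the actual code is available in the linked repository.

    # Minimal sketch of a divided space-time attention block in the spirit of
    # the Motion Transformer (MoT) described above. Illustrative only: names,
    # dimensions, and layer choices are assumptions, not the authors' code.
    import torch
    import torch.nn as nn


    class DividedSpaceTimeBlock(nn.Module):
        """One transformer block attending over time and space separately."""

        def __init__(self, dim: int = 128, heads: int = 4):
            super().__init__()
            self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.norm3 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, frames, joints, dim) -- embedded skeleton sequence
            b, t, j, d = x.shape

            # Temporal attention: each joint attends across all frames.
            xt = x.permute(0, 2, 1, 3).reshape(b * j, t, d)
            h = self.norm1(xt)
            xt = xt + self.time_attn(h, h, h, need_weights=False)[0]
            x = xt.reshape(b, j, t, d).permute(0, 2, 1, 3)

            # Spatial attention: joints within each frame attend to each other.
            xs = x.reshape(b * t, j, d)
            h = self.norm2(xs)
            xs = xs + self.space_attn(h, h, h, need_weights=False)[0]
            x = xs.reshape(b, t, j, d)

            # Position-wise feed-forward network with residual connection.
            return x + self.mlp(self.norm3(x))


    if __name__ == "__main__":
        block = DividedSpaceTimeBlock()
        motion = torch.randn(2, 60, 22, 128)  # 2 clips, 60 frames, 22 joints
        print(block(motion).shape)            # torch.Size([2, 60, 22, 128])

In a full retrieval pipeline, the pooled output of such a motion encoder would be aligned with sentence embeddings from a text encoder (e.g., BERT or CLIP, as named in the abstract) via a metric-learning objective such as a triplet or InfoNCE-style contrastive loss; the abstract mentions experimenting with two such widely adopted losses.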

Related projects

EF16_019/0000822, research and development project
Name: Centre of Excellence for Cybercrime, Cybersecurity and Protection of Critical Information Infrastructures