FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment

1School of Intelligence Science and Technology, University of Science and Technology Beijing, 2Wangxuan Institute of Computer Technology, Peking University

Figure 1: An overview of FineParser. It enhances human-centric foreground action representations by exploiting fine-grained semantic consistency and spatio-temporal correlation between video frames, improving AQA performance. Green, red, yellow, and blue dashed lines represent the fine-grained alignment of target actions between query and exemplar videos in time and space under the same semantics.

Abstract

Existing action quality assessment (AQA) methods mainly learn deep representations at the video level for scoring diverse actions. Lacking a fine-grained understanding of actions in videos, they suffer from low credibility and interpretability, making them insufficient for stringent applications such as Olympic diving events. We argue that a fine-grained understanding of actions requires the model to perceive and parse actions in both time and space, which is also the key to the credibility and interpretability of the AQA technique. Based on this insight, we propose a new fine-grained spatio-temporal action parser named FineParser. It learns human-centric foreground action representations by focusing on target action regions within each frame and exploiting their fine-grained alignments in time and space, minimizing the impact of invalid backgrounds during assessment. In addition, we construct fine-grained annotations of human-centric foreground action masks for the FineDiving dataset, called FineDiving-HM. With refined annotations of diverse target action procedures, FineDiving-HM can promote the development of real-world AQA systems. Through extensive experiments, we demonstrate the effectiveness of FineParser, which outperforms state-of-the-art methods while supporting more fine-grained action understanding tasks. Data and code are available at https://github.com/PKU-ICST-MIPL/FineParser_CVPR2024.

Approach


Figure 2: The architecture of the proposed FineParser. Given a pair of query and exemplar videos, the spatial action parser (SAP) and temporal action parser (TAP) extract spatio-temporal representations of human-centric foreground actions from the two videos, while also predicting target action masks and step transitions. The static visual encoder (SVE) captures static visual representations, which are combined with the target action representations to mine more contextual details. Finally, fine-grained contrastive regression (FineREG) uses these comprehensive representations to predict the action score of the query video.
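To make the data flow in Figure 2 concrete, here is a minimal PyTorch-style sketch of how the four components could be wired together. The module boundaries follow the figure, but all interfaces, names, and the additive feature fusion are simplifying assumptions, not the released implementation.

import torch.nn as nn

class FineParserSketch(nn.Module):
    """Minimal sketch of the FineParser pipeline; interfaces are hypothetical."""

    def __init__(self, sap, tap, sve, fine_reg):
        super().__init__()
        self.sap = sap            # spatial action parser: frames -> masks + foreground features
        self.tap = tap            # temporal action parser: features -> step transitions
        self.sve = sve            # static visual encoder: frames -> static context features
        self.fine_reg = fine_reg  # fine-grained contrastive regression head

    def forward(self, query, exemplar, exemplar_score):
        # Parse each video in space: predict target action masks and
        # foreground-focused features for every frame.
        q_mask, q_feat = self.sap(query)
        e_mask, e_feat = self.sap(exemplar)

        # Parse each video in time: predict step transitions so the two
        # action procedures can be aligned step by step.
        q_steps = self.tap(q_feat)
        e_steps = self.tap(e_feat)

        # Enrich foreground features with static visual context
        # (additive fusion is an assumption made for this sketch).
        q_feat = q_feat + self.sve(query)
        e_feat = e_feat + self.sve(exemplar)

        # Contrastive regression: predict the score difference between the
        # query and exemplar, then offset by the exemplar's known score.
        delta = self.fine_reg(q_feat, e_feat, q_steps, e_steps)
        return exemplar_score + delta, (q_mask, e_mask)

Regressing the score difference against an exemplar with a known score follows the contrastive regression design common in recent AQA work; the fine-grained part here is that the regression head also consumes the step-level alignments.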

Dataset: FineDiving-HM


Figure 3: Examples of human-centric action mask annotations for the FineDiving dataset. The text on the right indicates the action type.

FineDiving-HM contains 312,256 mask frames covering 3,000 videos, in which each mask labels the target action region, distinguishing the human-centric foreground from the background. FineDiving-HM addresses the lack of frame-level annotations needed to understand human-centric actions at fine-grained spatial and temporal levels. To control annotation quality, we employed three workers with prior diving knowledge to double-check the annotations. Figure 3 shows examples of human-centric action mask annotations, which focus precisely on foreground target actions. Of the 312,256 foreground action masks in FineDiving-HM, 248,713 belong to individual diving and 63,543 to synchronized diving.
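To illustrate how such annotations might be consumed, the sketch below walks a directory of per-frame mask images and tallies them. The directory layout and file naming are assumptions for illustration, not the dataset's actual packaging.

from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical layout: FineDiving-HM/<video_id>/<frame_idx>.png, where each
# PNG is a binary mask of the human-centric foreground.
root = Path("FineDiving-HM")

def load_mask(path: Path) -> np.ndarray:
    """Read one mask frame as a boolean array (True = target action region)."""
    return np.array(Image.open(path).convert("L")) > 0

video_dirs = [d for d in sorted(root.iterdir()) if d.is_dir()]
mask_paths = [p for d in video_dirs for p in sorted(d.glob("*.png"))]

# The paper reports 3,000 videos and 312,256 mask frames in total.
print(f"{len(video_dirs)} videos, {len(mask_paths)} mask frames")

# Example: fraction of foreground pixels in the first mask.
if mask_paths:
    print(f"foreground coverage: {load_mask(mask_paths[0]).mean():.3f}")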

Results

Table 1: Performance comparison with state-of-the-art AQA methods on the FineDiving-HM dataset. Our results are highlighted in bold.


Table 2: Performance comparison with representative AQA methods on the MTL-AQA dataset. Our results are highlighted in bold.

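Results on these benchmarks are conventionally reported with Spearman's rank correlation (rho) and the relative L2 distance (R-L2, multiplied by 100). A minimal sketch of both metrics under their usual definitions; the function names and score-range arguments are illustrative:

import numpy as np
from scipy import stats

def spearman_rho(pred: np.ndarray, true: np.ndarray) -> float:
    """Spearman's rank correlation between predicted and ground-truth scores."""
    rho, _ = stats.spearmanr(pred, true)
    return float(rho)

def relative_l2(pred: np.ndarray, true: np.ndarray,
                score_min: float, score_max: float) -> float:
    """Relative L2 distance, normalized by the dataset's score range and
    conventionally reported multiplied by 100."""
    norm = (np.abs(pred - true) / (score_max - score_min)) ** 2
    return float(100.0 * norm.mean())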

Visualization


Figure 5: Visualization of the target action masks predicted by SAP. The predicted masks focus on the target action regions in each frame, minimizing the impact of invalid backgrounds on action quality assessment.
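A qualitative check in the spirit of Figure 5 can be produced by blending predicted masks onto the input frames. A minimal sketch, assuming the frame is an RGB uint8 array and the mask a boolean array of the same spatial size:

import numpy as np

def overlay_mask(frame: np.ndarray, mask: np.ndarray,
                 color=(0, 255, 0), alpha: float = 0.5) -> np.ndarray:
    """Blend a binary foreground mask onto an RGB frame (H, W, 3, uint8)."""
    out = frame.astype(np.float32)
    tint = np.asarray(color, dtype=np.float32)
    # Only the predicted target action region is tinted; the background
    # stays untouched, mirroring the qualitative results in Figure 5.
    out[mask] = (1.0 - alpha) * out[mask] + alpha * tint
    return out.astype(np.uint8)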


Video demonstrations: FineParser parses videos across both the temporal and spatial dimensions.

BibTeX

@misc{xu2024fineparser,
  title={FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment},
  author={Jinglin Xu and Sibo Yin and Guohao Zhao and Zishuo Wang and Yuxin Peng},
  year={2024},
  eprint={2405.06887},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}