AI Heap

Multi-Scale Contrastive Learning for Video Temporal Grounding

arXiv:2412.07157
Authors
  • Thong Thanh Nguyen
  • Yi Bin
  • Xiaobao Wu
  • Zhiyuan Hu
  • Cong-Duy T Nguyen
  • See-Kiong Ng
  • Anh Tuan Luu
Temporal grounding, which localizes video moments related to a natural language query, is a core problem of vision-language learning and video understanding. To encode video moments of varying lengths, recent methods employ a multi-level structure known as a feature pyramid, in which lower levels concentrate on short-range video moments while higher levels address long-range moments. Because higher levels are downsampled to accommodate increasing moment lengths, their capacity to capture information is reduced, which degrades the resulting moment representations. To resolve this problem, we propose a contrastive learning framework that captures salient semantics among video moments. Our key methodology is to leverage samples from the feature space emanating from multiple stages of the video encoder itself, requiring neither data augmentation nor online memory banks to obtain positive and negative samples. To enable this, we introduce a sampling process that draws multiple video moments corresponding to a common query. By utilizing these moments’ representations across video encoder layers, we then instantiate a novel form of multi-scale and cross-scale contrastive learning that links local short-range video moments with global long-range video moments. Extensive experiments demonstrate the effectiveness of our framework for not only long-form but also short-form video grounding.
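
To make the cross-scale objective concrete, here is a minimal sketch, not the authors' implementation: it assumes precomputed moment representations from a lower pyramid level (short-range moments) and a higher one (long-range moments), uses hypothetical names such as cross_scale_info_nce, low_feats, high_feats, and query_ids, and treats moments grounded by the same query as positives in an InfoNCE-style loss.

```python
import torch
import torch.nn.functional as F


def cross_scale_info_nce(low_feats, high_feats, query_ids, temperature=0.07):
    """
    Contrast short-range moment features (lower pyramid level) against
    long-range moment features (higher pyramid level).

    low_feats:  (N, D) moment representations from a lower encoder level
    high_feats: (M, D) moment representations from a higher encoder level
    query_ids:  pair of (N,) and (M,) id tensors; moments that share a
                query id are positives, all other pairs are negatives
    """
    low_ids, high_ids = query_ids

    # Normalize so dot products are cosine similarities.
    low = F.normalize(low_feats, dim=-1)
    high = F.normalize(high_feats, dim=-1)

    # Similarity of every low-level moment to every high-level moment.
    logits = low @ high.t() / temperature                      # (N, M)

    # Positive mask: moments grounded by the same query.
    pos_mask = low_ids.unsqueeze(1) == high_ids.unsqueeze(0)   # (N, M)

    # InfoNCE with possibly several positives per anchor: average the
    # log-probability over all positives of each anchor.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count

    # Only anchors that actually have a positive contribute.
    has_pos = pos_mask.any(dim=1)
    return loss[has_pos].mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    d = 256
    # Toy example: 6 short-range and 4 long-range moments over 3 queries.
    low_feats = torch.randn(6, d)
    high_feats = torch.randn(4, d)
    low_ids = torch.tensor([0, 0, 1, 1, 2, 2])
    high_ids = torch.tensor([0, 1, 2, 2])
    print(cross_scale_info_nce(low_feats, high_feats, (low_ids, high_ids)))
```

Under the same assumptions, a full multi-scale variant would sum this term over pairs of pyramid levels, both within a scale and across scales, so that local short-range moments are pulled toward the global long-range moments of the same query.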