Alt-MoE: Multimodal Alignment via Alternating Optimization of Multi-directional MoE with Unimodal Models

arXiv:2409.05929
Authors
  • Hongyang Lei
  • Xiaolong Cheng
  • Dan Wang
  • Kun Fan
  • Qi Qin
  • Huazhen Huang
  • Yetao Wu
  • Qingqing Gu
  • Zhonglin Jiang
  • Yong Chen
  • Luo Ji
Affiliations
  • Geely Automobile Research Institute (Ningbo) Co., Ltd
  • Peking University
  • Shenzhen Institute of Advanced Technology, CAS
Recent Large Multi-Modal Models (LMMs) have made significant advances in multi-modal alignment by employing lightweight connection modules to facilitate the representation and fusion of knowledge from existing pre-trained uni-modal models. However, these methods still rely on modality-specific and direction-specific connectors, leading to compartmentalized knowledge representations and reduced computational efficiency, which limits the model's ability to form unified multi-modal representations. To address these issues, we introduce Alt-MoE, a novel training framework that uses a Mixture of Experts (MoE) as a unified multi-directional connector across modalities and applies a multi-step, sequential, alternating unidirectional alignment strategy that converges to bidirectional alignment over iterations. Extensive empirical studies reveal the following key points: 1) Alt-MoE achieves competitive results by integrating diverse knowledge representations from uni-modal models, seamlessly fusing the specialized expertise of existing high-performance uni-modal models into a cohesive multi-modal representation. 2) Alt-MoE scales efficiently to new tasks and modalities without altering its model architecture or training strategy. Furthermore, Alt-MoE operates in latent space, supporting vector pre-storage and real-time retrieval via the lightweight multi-directional MoE, thereby facilitating massive data processing. Our methodology has been validated on several well-performing uni-modal models (LLAMA3, Qwen2, and DINOv2), achieving competitive results on a wide range of downstream tasks and datasets.
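To make the core idea concrete, the sketch below illustrates a shared MoE connector trained with an alternating unidirectional alignment objective between frozen uni-modal latents. This is an illustrative assumption rather than the authors' implementation: the soft (dense) expert mixture, the InfoNCE-style contrastive loss, the shared latent dimension, and all names and hyperparameters are placeholders chosen for brevity.

```python
# Illustrative sketch only -- not the authors' code. Frozen uni-modal encoders
# produce latents; a shared MoE connector maps between the two latent spaces,
# and training alternates the direction of a unidirectional alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEConnector(nn.Module):
    """Shared multi-directional connector: a soft mixture of expert MLPs.
    (A soft mixture is used here for brevity; sparse top-k routing is a
    common alternative.)"""
    def __init__(self, dim: int, num_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) latent from either modality
        gate = F.softmax(self.router(x), dim=-1)                       # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, dim)
        return (gate.unsqueeze(-1) * expert_out).sum(dim=1)            # (batch, dim)

def alternating_alignment_step(connector, optimizer, img_z, txt_z, step, tau=0.07):
    """One training step: even steps align image->text, odd steps text->image.
    The InfoNCE-style contrastive loss is an assumption for illustration."""
    if step % 2 == 0:
        pred, target = connector(img_z), txt_z.detach()   # image -> text direction
    else:
        pred, target = connector(txt_z), img_z.detach()   # text -> image direction
    logits = F.normalize(pred, dim=-1) @ F.normalize(target, dim=-1).T
    labels = torch.arange(logits.size(0), device=logits.device)
    loss = F.cross_entropy(logits / tau, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy usage with random tensors standing in for frozen encoder outputs.
    dim, batch = 256, 16
    connector = MoEConnector(dim)
    optimizer = torch.optim.AdamW(connector.parameters(), lr=1e-4)
    for step in range(4):
        img_z = torch.randn(batch, dim)   # e.g., pooled vision-encoder features
        txt_z = torch.randn(batch, dim)   # e.g., pooled LLM hidden states
        print(step, alternating_alignment_step(connector, optimizer, img_z, txt_z, step))
```

Because only the lightweight connector is trained and alignment happens entirely in latent space, uni-modal embeddings could be pre-computed and stored, with the connector applied on the fly, which is consistent with the vector pre-storage and real-time retrieval described in the abstract.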