Refining Sentence Embedding Model through Ranking Sentences Generation with Large Language Models

arXiv:2502.13656 [arXiv, PDF]
Authors
  • Liyang He
  • Chenglong Liu
  • Rui Li
  • Zhenya Huang
  • Shulan Ruan
  • Jun Zhou
  • Enhong Chen
Affiliations
  • State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
  • State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Shenzhen International Graduate School, Tsinghua University
  • Zhejiang University
Sentence embedding is essential for many NLP tasks, and contrastive learning methods achieve strong performance using annotated datasets such as NLI. Yet the reliance on manual labels limits scalability. Recent studies leverage large language models (LLMs) to generate sentence pairs, reducing annotation dependency, but they overlook ranking information that is crucial for fine-grained semantic distinctions. To tackle this challenge, we propose a method for controlling the generation direction of LLMs in the latent space. Unlike unconstrained generation, this controlled approach ensures meaningful semantic divergence. We then refine existing sentence embedding models by integrating ranking information and semantic information. Experiments on multiple benchmarks demonstrate that our method achieves new state-of-the-art (SOTA) performance at a modest cost in ranking-sentence synthesis.
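The abstract only summarizes the approach, so the Python sketch below is an illustration of the general idea rather than the authors' implementation: it shows one generic way to fold ranking information into a contrastive objective, pulling an anchor sentence embedding toward LLM-generated sentences in rank order (most to least semantically similar). The function name ranking_contrastive_loss, the temperature of 0.05, and the listwise loss form are all assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def ranking_contrastive_loss(anchor, ranked_positives, temperature=0.05):
        """Listwise ranking-aware contrastive loss (illustrative sketch).

        anchor:           (d,) embedding of the original sentence.
        ranked_positives: (k, d) embeddings of generated sentences, ordered
                          from most to least semantically similar to anchor.

        For each rank i, sentence i is treated as the positive against all
        lower-ranked sentences, so training encourages
        sim(anchor, s_i) > sim(anchor, s_j) whenever i < j.
        """
        # Cosine similarity between the anchor and each ranked sentence: (k,)
        sims = F.cosine_similarity(anchor.unsqueeze(0), ranked_positives) / temperature
        k = sims.size(0)
        loss = sims.new_zeros(())
        for i in range(k - 1):
            # Softmax over rank i and everything below it; index 0 is the positive.
            logits = sims[i:].unsqueeze(0)                        # (1, k - i)
            target = torch.zeros(1, dtype=torch.long, device=sims.device)
            loss = loss + F.cross_entropy(logits, target)
        return loss / (k - 1)

    # Toy usage: four generated sentences at increasing semantic distance.
    anchor = torch.randn(768)
    ranked = torch.randn(4, 768)
    print(ranking_contrastive_loss(anchor, ranked))

Because each rank serves as the positive for everything below it, the loss supplies a graded training signal rather than the binary positive/negative split of standard contrastive learning, which is the kind of fine-grained distinction the abstract argues plain pair generation misses.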