AI Heap
Sliding Window Attention Training for Efficient Large Language Models

arXiv:2502.18845 [arXiv, PDF]
Authors
  • Zichuan Fu
  • Wentao Song
  • Yejing Wang
  • Xian Wu
  • Yefeng Zheng
  • Yingying Zhang
  • Derong Xu
  • Xuetao Wei
  • Tong Xu
  • Xiangyu Zhao

Affiliations
  • City University of Hong Kong
  • Xi’an Jiaotong University
  • Jarvis Research Center, Tencent YouTu Lab
  • Jarvis Research Center, Tencent YouTu Lab, Westlake University
  • City University of Hong Kong, University of Science and Technology of China
  • Southern University of Science and Technology
  • University of Science and Technology of China
Abstract
Recent advances in transformer-based Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks. However, their quadratic computational complexity with respect to sequence length remains a significant bottleneck for processing long documents. Many approaches, such as sparse attention and state space models, have therefore been proposed to improve the efficiency of LLMs on long sequences. Though effective, these approaches either compromise performance or introduce structural complexity, which calls for a simple yet efficient model that preserves the fundamental Transformer architecture. To this end, we introduce SWAT, which enables efficient long-context handling via Sliding Window Attention Training. This paper first attributes the inefficiency of Transformers to the attention sink phenomenon, which results from the high variance of the softmax operation. We then replace softmax with the sigmoid function and combine balanced ALiBi with Rotary Position Embedding for efficient information compression and retention. Experiments demonstrate that SWAT outperforms state-of-the-art linear recurrent architectures on eight benchmarks. Code is available at https://github.com/Fzkuji/swat-attention.
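
To make the mechanism described in the abstract concrete, below is a minimal sketch of sigmoid-weighted, causal sliding-window attention with an ALiBi-style linear distance bias. It is not the authors' implementation (see the linked repository for that): the function name sliding_window_sigmoid_attention, the window size, the single bias slope, and the omission of RoPE and of balanced positive/negative per-head slopes are all illustrative assumptions.

```python
# Sketch only: sigmoid-weighted causal sliding-window attention with an
# ALiBi-style distance bias. Shapes, window size, and slope are assumptions,
# not values from the paper.
import torch


def sliding_window_sigmoid_attention(q, k, v, window=256, alibi_slope=0.05):
    """q, k, v: (batch, heads, seq_len, head_dim) -> (batch, heads, seq_len, head_dim)."""
    b, h, n, d = q.shape
    scores = torch.einsum("bhid,bhjd->bhij", q, k) / d ** 0.5

    # Relative distance i - j between query position i and key position j.
    pos = torch.arange(n, device=q.device)
    dist = pos[:, None] - pos[None, :]                    # (n, n)

    # Causal sliding-window mask: each query attends to the last `window` keys.
    mask = (dist >= 0) & (dist < window)                  # (n, n), bool

    # ALiBi-style bias: penalize scores linearly with distance into the past.
    scores = scores - alibi_slope * dist.clamp(min=0)

    # Elementwise sigmoid instead of softmax: each key is scored independently,
    # so no normalization across the window is enforced.
    weights = torch.sigmoid(scores).masked_fill(~mask, 0.0)

    return torch.einsum("bhij,bhjd->bhid", weights, v)


if __name__ == "__main__":
    q = torch.randn(1, 4, 512, 64)
    k = torch.randn(1, 4, 512, 64)
    v = torch.randn(1, 4, 512, 64)
    out = sliding_window_sigmoid_attention(q, k, v)
    print(out.shape)  # torch.Size([1, 4, 512, 64])
```

Because the sigmoid scores each key independently rather than normalizing across the window, no position is forced to absorb leftover probability mass, which is the intuition the abstract associates with avoiding the attention sink.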