AI Heap
Identifying Query-Relevant Neurons in Large Language Models for Long-Form Texts

arXiv:2406.10868
Authors
  • Lihu Chen
  • Adam Dejl
  • Francesca Toni
Large Language Models (LLMs) possess vast amounts of knowledge within their parameters, prompting research into methods for locating and editing this knowledge. Previous work has largely focused on locating entity-related (often single-token) facts in smaller models. However, several key questions remain unanswered: (1) How can we effectively locate query-relevant neurons in decoder-only LLMs, such as Llama and Mistral? (2) How can we address the challenge of long-form (or free-form) text generation? (3) Are there localized knowledge regions in LLMs? In this study, we introduce Query-Relevant Neuron Cluster Attribution (QRNCA), a novel architecture-agnostic framework capable of identifying query-relevant neurons in LLMs. QRNCA allows for the examination of long-form answers beyond triplet facts by employing the proxy task of multi-choice question answering. To evaluate the effectiveness of our detected neurons, we build two multi-choice QA datasets spanning diverse domains and languages. Empirical evaluations demonstrate that our method outperforms baseline methods significantly. Further, analysis of neuron distributions reveals the presence of visible localized regions, particularly within different domains. Finally, we show potential applications of our detected neurons in knowledge editing and neuron-based prediction.
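The abstract describes locating query-relevant neurons through a multi-choice question-answering proxy. As a rough illustration of that general idea (not the authors' QRNCA implementation), the sketch below attributes the intermediate FFN activations of a small decoder-only model to the logit of the correct option letter, using a simple activation-times-gradient score. The model choice (gpt2), the prompt, and the hook layout are assumptions made for brevity; the paper targets larger models such as Llama and Mistral and uses a more elaborate attribution and clustering procedure.

```python
# Minimal, illustrative sketch only; not the authors' released code.
# Assumptions: GPT-2 for size, "neurons" = post-GELU FFN activations,
# attribution = activation * gradient of the correct-choice logit
# (a common first-order approximation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # the paper studies decoder-only LLMs such as Llama/Mistral
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

acts = {}  # layer index -> intermediate FFN activations

def make_hook(layer_idx):
    def hook(module, inputs, output):
        output.retain_grad()       # keep gradients on this intermediate tensor
        acts[layer_idx] = output   # shape: (batch, seq, d_ffn)
    return hook

handles = [
    block.mlp.act.register_forward_hook(make_hook(i))
    for i, block in enumerate(model.transformer.h)
]

# Multi-choice proxy: score the logit of the correct option letter (hypothetical example question).
prompt = ("Question: Which organ pumps blood through the body?\n"
          "A. Liver  B. Heart  C. Lung  D. Kidney\nAnswer:")
correct = " B"

inputs = tok(prompt, return_tensors="pt")
logits = model(**inputs).logits[0, -1]      # next-token logits at the answer position
target_id = tok.encode(correct)[0]
score = logits[target_id]
score.backward()                            # populates gradients on the hooked activations

# Per-neuron attribution at the final prompt position, for each layer.
attributions = {
    i: (a[0, -1] * a.grad[0, -1]).detach()  # activation * gradient
    for i, a in acts.items()
}
top = torch.topk(torch.cat(list(attributions.values())), k=10)
print("top attribution scores:", top.values)

for h in handles:
    h.remove()
```

Aggregating such per-query scores across a set of related questions, and keeping neurons that score highly consistently, is one plausible way to arrive at query-relevant neuron clusters of the kind the abstract describes; the exact criteria used by QRNCA are given in the paper.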