SemEval-2025 Task 11: Bridging the Gap in Text-Based Emotion Detection

arXiv:2503.07269
Authors: Shamsuddeen Hassan Muhammad, Nedjma Ousidhoum, Idris Abdulmumin, Seid Muhie Yimam, Jan Philip Wahle, Terry Ruas, Meriem Beloucif, Christine De Kock, Tadesse Destaw Belay, Ibrahim Said Ahmad, Nirmal Surange, Daniela Teodorescu, David Ifeoluwa Adelani, Alham Fikri Aji, Felermino Ali, Vladimir Araujo, Abinew Ali Ayele, Oana Ignat, Alexander Panchenko, Yi Zhou, Saif M. Mohammad
Affiliations: Imperial College London; Cardiff University; DSFSI, University of Pretoria; University of Hamburg; University of Göttingen; Uppsala University; University of Melbourne; Instituto Politécnico Nacional; Wollo University; Northeastern University; IIIT Hyderabad; University of Alberta; MILA; McGill University; Canada CIFAR AI Chair; MBZUAI; LIACC, FEUP, University of Porto; Bahir Dar University; National Research Council Canada
We present our shared task on text-based emotion detection, covering more than 30 languages from seven distinct language families. These languages are predominantly low-resource and are spoken across multiple continents. The data instances are multi-labeled with six emotion classes, with additional datasets in 11 languages annotated for emotion intensity. Participants were asked to predict labels in three tracks: (a) multi-label emotion detection, (b) emotion intensity score detection, and (c) cross-lingual emotion detection. The task attracted over 700 participants, and we received final submissions from more than 200 teams and 93 system description papers. We report baseline results, along with findings on the best-performing systems, the most common approaches, and the most effective methods across the different tracks and languages. The datasets for this task are publicly available at https://brighter-dataset.github.io.
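To make the multi-label setup of Track A concrete, the sketch below pairs toy texts with binary vectors over six emotion labels (anger, disgust, fear, joy, sadness, surprise, as described in the task) and trains a generic TF-IDF plus one-vs-rest logistic regression classifier. The example sentences, label assignments, and model choice are illustrative assumptions, not the organizers' dataset or official baseline.

```python
# Illustrative sketch only: toy data and a generic multi-label baseline,
# not the organizers' dataset loader or official baseline system.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Six emotion labels as described in the task.
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

# Toy instances: each text is paired with a binary vector over the six emotions
# (multi-label, so a single text may carry several emotions at once).
texts = [
    "I can't believe they cancelled the show, this is outrageous.",
    "What a wonderful surprise to see you here!",
    "I'm terrified of what the results might show.",
    "The food smelled awful and I had to push the plate away.",
]
labels = np.array([
    [1, 0, 0, 0, 1, 0],  # anger, sadness
    [0, 0, 0, 1, 0, 1],  # joy, surprise
    [0, 0, 1, 0, 0, 0],  # fear
    [0, 1, 0, 0, 0, 0],  # disgust
])

# One binary logistic-regression classifier per emotion over shared TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, labels)

# Per-emotion probabilities for a new sentence; thresholding at 0.5 yields the label set.
probs = model.predict_proba(["I was scared and angry at the same time."])[0]
for emotion, p in zip(EMOTIONS, probs):
    print(f"{emotion}: {p:.2f}")
```

Track B replaces the binary labels with per-emotion intensity scores, so the same scaffold would swap the per-label classifier for an ordinal or regression model; Track C evaluates models trained on one language against test data in another.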