The 31st International Conference on Computational Linguistics (COLING 2025) is being held in Abu Dhabi from January 18-24, hosted by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). COLING is a major biennial conference on NLP and computational linguistics that brings together leaders from research centers, academia, and industry. The program features paper presentations, demonstrations, keynote speeches, workshops, and tutorials, with over 1,500 attendees. MBZUAI faculty and students contributed 22 papers to the conference, including research on fact-checking and cross-cultural content. Why it matters: Hosting COLING 2025 underscores the UAE's growing role as a hub for AI and NLP research, particularly in Arabic language processing, and provides a platform to address regional linguistic challenges.
The first Workshop on Language Models for Low-Resource Languages (LoResLM 2025) was held in Abu Dhabi as part of COLING 2025. It provided a forum for researchers to share work on language models for low-resource languages. The workshop accepted 35 papers from 52 submissions, covering diverse languages and research areas.
The GenAI Content Detection Task 1 is a shared task on detecting machine-generated text, featuring monolingual (English) and multilingual subtasks. The task, part of the GenAI workshop at COLING 2025, attracted 36 teams for the English subtask and 26 for the multilingual one. The organizers provide a detailed overview of the data, results, system rankings, and analysis of the submitted systems.
ParlaMint is a CLARIN ERIC flagship project focused on harmonizing multilingual corpora of parliamentary sessions. The newest version, published in October 2023, covers 26 European parliaments with linguistic annotations and machine translations to English. Maciej Ogrodniczuk, Head of the Linguistic Engineering Group at the Institute of Computer Science, Polish Academy of Sciences, presented the project. Why it matters: While focused on European parliaments, the ParlaMint project provides a valuable model and infrastructure for creating comparable Arabic parliamentary corpora, which could enhance Arabic NLP research and political analysis in the Middle East.
The InterText project, funded by the European Research Council, aims to advance NLP by developing a framework for modeling fine-grained relationships between texts. This approach enables tracing the origin and evolution of texts and ideas. Iryna Gurevych from the Technical University of Darmstadt presented the intertextual approach to NLP, covering data modeling, representation learning, and practical applications. Why it matters: This research could enable a new generation of AI applications for text work and critical reading, with potential applications in collaborative knowledge construction and document revision assistance.
This paper introduces a new task: detecting propaganda techniques in code-switched text. The authors created and released a corpus of 1,030 English-Roman Urdu code-switched texts annotated with 20 propaganda techniques. Experiments show the importance of directly modeling multilinguality and using the right fine-tuning strategy for this task.
This paper introduces two shared tasks for abusive and threatening language detection in Urdu, a low-resource language with over 170 million speakers. The tasks involve binary classification of Urdu tweets into Abusive/Non-Abusive and Threatening/Non-Threatening categories, respectively. The organizers created and manually annotated datasets of 2,400/6,000 training tweets and 1,100/3,950 testing tweets, along with logistic regression and BERT-based baselines. Twenty-one teams participated; the best systems achieved F1-scores of 0.880 and 0.545 on the abusive and threatening language tasks, respectively, with m-BERT performing best.
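As a quick illustration of the F1-score used to rank systems in shared tasks like these, here is a minimal sketch of the metric for binary classification. The gold labels and predictions below are invented toy data, not from the actual Urdu datasets:

```python
# Toy illustration of the binary F1-score used to rank shared-task systems.
# 1 = Abusive, 0 = Non-Abusive (labels and predictions are made up).

def f1_score(gold, pred):
    """F1 for the positive class: harmonic mean of precision and recall."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = [1, 0, 1, 1, 0, 1]
pred = [1, 0, 0, 1, 1, 1]
print(f1_score(gold, pred))  # 0.75: precision and recall are both 3/4
```

Because F1 ignores true negatives, it rewards systems that find the abusive or threatening class rather than ones that simply predict the majority class.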