GCC AI Research

Results for "COLING 2025"

Leading natural language processing conference to take place in Abu Dhabi

MBZUAI ·

The 31st International Conference on Computational Linguistics (COLING 2025) will be held in Abu Dhabi in January 2025, hosted by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). COLING is a major biennial NLP and AI conference that brings together leaders from research centers, academia, and industry. The conference will feature keynote talks, presentations, workshops, and tutorials, with 1,500 expected participants. Why it matters: Hosting COLING underscores the UAE's growing role in AI and NLP research and provides a platform to address regional linguistic challenges and advance AI technologies.

MBZUAI welcomes the world to Abu Dhabi as COLING 2025 opens

MBZUAI ·

The 31st International Conference on Computational Linguistics (COLING 2025) is being held in Abu Dhabi from January 18-24, hosted by MBZUAI. The conference features paper presentations, demonstrations, keynote speeches, workshops, and tutorials, with over 1,500 attendees. MBZUAI faculty and students contributed 22 papers to the conference, including research on fact-checking and cross-cultural content. Why it matters: Hosting COLING 2025 highlights the UAE's growing role as a hub for AI and NLP research, particularly in Arabic language processing.

GenAI Content Detection Task 1: English and Multilingual Machine-Generated Text Detection: AI vs. Human

arXiv ·

The GenAI Content Detection Task 1 is a shared task on detecting machine-generated text, featuring monolingual (English) and multilingual subtasks. The task, part of the GenAI workshop at COLING 2025, attracted 36 teams for the English subtask and 26 for the multilingual one. The organizers provide a detailed overview of the data, results, system rankings, and analysis of the submitted systems.

Six predictions for how AI will evolve in 2025

MBZUAI ·

MBZUAI's Provost, Tim Baldwin, offers six predictions for AI in 2025, highlighting the rise of agentic AI systems capable of performing actions on behalf of users. He notes the recent release of reasoning models such as DeepSeek's open-weight R1 and OpenAI's o3-mini, emphasizing how quickly the field is moving. Baldwin stresses the potential benefits of agentic AI, such as automating complex tasks like travel planning, while cautioning that deployment needs care given the risk of unforeseen outcomes. Why it matters: The predictions provide insight into the near-term trajectory of AI development and deployment, particularly regarding AI agents, and highlight the role of a UAE university in shaping the discussion around AI innovation.

NADI 2024: The Fifth Nuanced Arabic Dialect Identification Shared Task

arXiv ·

The fifth Nuanced Arabic Dialect Identification shared task (NADI 2024) aimed to advance Arabic NLP through dialect identification and dialect-to-Modern Standard Arabic (MSA) machine translation. In total, 51 teams registered, 12 of which submitted 76 valid runs across the three subtasks. The winning teams achieved an F1 of 50.57 for multi-label dialect identification, an RMSE of 0.1403 for dialectness-level estimation, and a BLEU of 20.44 for dialect-to-MSA translation. Why it matters: The results highlight the continued challenges in Arabic dialect processing and provide a benchmark for future research in this area.
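The two classification-style metrics above can be sketched in plain Python. This is an illustrative implementation only, not the official NADI scorer; the dialect labels in the toy example and the macro averaging scheme are assumptions:

```python
import math

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall; 0 if either is undefined."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(gold, pred, labels):
    """Average per-label F1 over a fixed dialect label set.

    gold/pred are lists of label *sets*, since the subtask is
    multi-label: one text may be valid in several dialects."""
    scores = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if lab in g and lab in p)
        fp = sum(1 for g, p in zip(gold, pred) if lab not in g and lab in p)
        fn = sum(1 for g, p in zip(gold, pred) if lab in g and lab not in p)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

def rmse(gold, pred):
    """Root mean squared error, as used for dialectness-level regression."""
    return math.sqrt(sum((g - p) ** 2 for g, p in zip(gold, pred)) / len(gold))

# Toy example with hypothetical dialect labels.
labels = ["Egypt", "Morocco", "Gulf"]
gold = [{"Egypt"}, {"Egypt", "Gulf"}, {"Morocco"}]
pred = [{"Egypt"}, {"Gulf"}, {"Morocco", "Egypt"}]
print(round(macro_f1(gold, pred, labels), 3))  # → 0.833
print(round(rmse([0.2, 0.8], [0.1, 0.9]), 3))  # → 0.1
```

Note that a shared task's reported F1 may use a different averaging variant (micro, weighted, or per-sample); the official scorer is authoritative.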

AraFinNLP 2024: The First Arabic Financial NLP Shared Task

arXiv ·

The AraFinNLP 2024 shared task introduced two subtasks for Arabic financial NLP: multi-dialect intent detection and cross-dialect translation with intent preservation. It used the updated ArBanking77 dataset of 39k parallel queries in MSA and four dialects, labeled with 77 banking-related intents. In total, 45 teams registered; 11 participated in intent detection (top F1 score of 0.8773), while only one team attempted translation (BLEU score of 1.667). Why it matters: This initiative addresses the need for specialized Arabic NLP tools in the growing Arab financial sector, promoting advancements in areas like banking chatbots and machine translation.
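For readers unfamiliar with the BLEU figure reported for the translation subtask, here is a minimal unsmoothed corpus-BLEU sketch. The shared task would have used a standard toolkit (e.g. SacreBLEU, which is an assumption here), so treat this as illustrative of the metric, not of the official scoring pipeline:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU: geometric mean of modified n-gram precisions
    (up to max_n) times a brevity penalty. No smoothing, so any
    n-gram order with zero matches yields a score of 0."""
    match = [0] * max_n
    total = [0] * max_n
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len += len(ref)
        hyp_len += len(hyp)
        for n in range(1, max_n + 1):
            r, h = ngrams(ref, n), ngrams(hyp, n)
            # Clip each hypothesis n-gram count by its reference count.
            match[n - 1] += sum(min(c, r[g]) for g, c in h.items())
            total[n - 1] += max(len(hyp) - n + 1, 0)
    if 0 in match:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)

# A hypothesis identical to its reference scores the full 100.
ref = ["the bank transferred the funds".split()]
print(round(corpus_bleu(ref, ref), 1))  # → 100.0
```

A BLEU of 1.667, as in the translation subtask, means almost no n-gram overlap with the references on this 0–100 scale, underscoring how hard dialect-preserving translation remains.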

Predicting and Explaining Cross-lingual Zero-shot and Few-shot Transfer in LLMs

MBZUAI ·

Project LITMUS explores predicting cross-lingual transfer accuracy in multilingual language models, even without test data in target languages. The goal is to estimate model performance in low-resource languages and to optimize training data for desired cross-lingual performance. The research aims to identify the factors that influence cross-lingual transfer, contributing to linguistically fair massively multilingual language models (MMLMs). Why it matters: Improving cross-lingual transfer is vital for creating more equitable and effective multilingual AI systems, especially for languages with limited resources.
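As a rough illustration of the performance-prediction idea, one can regress observed transfer accuracy on a feature of each language, then extrapolate to a language with no test set. The single feature, the numbers, and the linear model below are all hypothetical stand-ins; LITMUS itself uses richer feature sets and predictors:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ≈ a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical observations: log10 of target-language pretraining
# tokens vs. zero-shot transfer accuracy, measured on languages
# where test sets DO exist.
log_tokens = [6.0, 7.0, 8.0, 9.0]
accuracy = [0.42, 0.55, 0.68, 0.81]
a, b = fit_line(log_tokens, accuracy)

# Predict accuracy for an unseen low-resource language with ~10^6.5
# pretraining tokens, for which no test data is available.
print(round(a * 6.5 + b, 3))  # → 0.485
```

The same fitted curve can be read in reverse to answer the data-allocation question the project raises: how many more target-language tokens would be needed to reach a desired accuracy.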