GCC AI Research

Results for "ACL"

MBZUAI at ACL 2023

MBZUAI ·

MBZUAI researchers had 26 papers accepted at ACL 2023, a top NLP conference. Assistant Professor Alham Fikri Aji co-authored eight papers, including one on crosslingual generalization through multitask finetuning (MTF). Deputy Department Chair Preslav Nakov co-authored a paper on a Bulgarian language understanding benchmark, dedicated to the memory of Yale computer scientist Dragomir R. Radev. Why it matters: MBZUAI's strong presence at ACL highlights its growing influence in the NLP field and its contributions to multilingual AI research.

Baldwin headlines ACL 2022

MBZUAI ·

MBZUAI Professor Timothy Baldwin delivered the presidential keynote at the 60th Annual Meeting of the Association for Computational Linguistics (ACL). Baldwin also published three papers at the conference, including work on biomedical literature summarization, NLP for Indonesian languages, and understanding procedural texts. The papers address challenges such as reducing human effort in reviewing medical documents and digitally preserving Indonesian indigenous languages. Why it matters: Baldwin's contributions and leadership role at ACL highlight the growing prominence of MBZUAI and GCC-based researchers in the global NLP community.

A new standard for evaluating Arabic language models presented at ACL

MBZUAI ·

MBZUAI researchers have created ArabicMMLU, the first benchmark dataset in Modern Standard Arabic for evaluating language understanding across multiple tasks. The dataset contains over 14,000 multiple-choice questions from school exams across the Arabic-speaking world and addresses the limitations of translated English datasets. It was presented at the 62nd Annual Meeting of the Association for Computational Linguistics in Bangkok. Why it matters: This benchmark enables a more accurate and culturally relevant evaluation of LLMs' capabilities in Arabic, which is crucial for developing AI tailored to the Arab world.

Commonsense Reasoning in Arab Culture

arXiv ·

A new dataset called ArabCulture is introduced to address the lack of culturally relevant commonsense reasoning resources in Arabic AI. The dataset covers 13 countries across the Gulf, Levant, North Africa, and the Nile Valley, spanning 12 daily life domains with 54 fine-grained subtopics. It was built from scratch by native speakers writing and validating culturally relevant questions. Why it matters: The dataset highlights the need for more culturally aware models and benchmarks tailored to the Arabic-speaking world, moving beyond machine-translated resources.

AceGPT, Localizing Large Language Models in Arabic

arXiv ·

Researchers introduce AceGPT, a localized large language model (LLM) specifically for Arabic, addressing cultural sensitivity and local values not well-represented in mainstream models. AceGPT incorporates further pre-training on Arabic texts, supervised fine-tuning using native Arabic instructions and GPT-4 responses, and reinforcement learning from AI feedback using a reward model attuned to local culture. Evaluations demonstrate that AceGPT achieves state-of-the-art performance among open Arabic LLMs across several benchmarks. Why it matters: This work advances culturally aware AI development for Arabic-speaking communities, providing a valuable resource and benchmark for future research.

Testing the limits of vision language models: A new benchmark dataset presented at ACL

MBZUAI ·

MBZUAI researchers presented EXAMS-V, a new benchmark dataset for evaluating the reasoning and processing abilities of vision language models (VLMs). EXAMS-V contains over 20,000 multiple-choice questions across 26 subjects and 11 languages, including Arabic. The dataset presents the questions within images, testing a model's ability to integrate visual and textual information. Why it matters: This dataset fills a gap in VLM evaluation, providing a valuable resource for assessing and improving the multimodal reasoning capabilities of these models, particularly in diverse languages like Arabic.

How jailbreak attacks work and a new way to stop them

MBZUAI ·

Researchers at MBZUAI and other institutions presented a study at ACL 2024 investigating how jailbreak attacks work on LLMs. The study used a dataset of 30,000 prompts and non-linear probing to interpret the effects of jailbreak attacks, finding that existing interpretations were inadequate. The researchers propose a new approach to improving LLM safety against such attacks by identifying the layers in the neural network where the jailbreak behavior emerges. Why it matters: Understanding and mitigating jailbreak attacks is crucial for ensuring the responsible and secure deployment of LLMs, particularly in the Arabic-speaking world where these models are increasingly being used.
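The layer-wise probing idea can be illustrated with a toy experiment: fit a probe on each layer's hidden states and watch where its held-out accuracy jumps. Everything below is a synthetic sketch, not the paper's method or data: the "hidden states" are random vectors, the layer at which the signal appears is an assumption, and a 1-nearest-neighbor classifier stands in for the study's non-linear probes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an LLM's per-layer hidden states on 200 prompts.
# Labels mark benign (0) vs jailbreak (1) prompts. In early "layers" the
# distinction is absent; from layer 3 on, one direction encodes it (toy
# assumption chosen so the probe has something to find).
n, dim, emerge_at = 200, 8, 3
labels = rng.integers(0, 2, size=n)

def hidden_states(layer):
    states = rng.normal(size=(n, dim))
    if layer >= emerge_at:
        states[:, 0] += 6.0 * labels  # the jailbreak signal appears here
    return states

def probe_accuracy(X, y, n_train=100):
    """1-nearest-neighbor probe (a simple non-linear classifier used here
    as a stand-in). Returns accuracy on a held-out split."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    dists = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    preds = ytr[dists.argmin(axis=1)]
    return float((preds == yte).mean())

# Accuracy per layer: a sharp jump localizes where the behavior is encoded.
accs = [probe_accuracy(hidden_states(layer), labels) for layer in range(6)]
```

On this synthetic data the probe hovers near chance on early layers and jumps once the signal appears, which is the shape of evidence the layer-identification approach relies on.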