GCC AI Research


Dataset

39 articles

DuwatBench: Bridging Language and Visual Heritage through an Arabic Calligraphy Benchmark for Multimodal Understanding

arXiv · · CV NLP

MBZUAI researchers introduce DuwatBench, a new benchmark for multimodal understanding of Arabic calligraphy. The dataset contains 1,272 samples across six calligraphic styles with detailed annotations to evaluate visual-text alignment. Evaluation of 13 multimodal models reveals challenges in processing calligraphic variations and artistic distortions, highlighting the need for culturally grounded AI research.

Continuous Saudi Sign Language Recognition: A Vision Transformer Approach

arXiv · · NLP CV

The researchers introduce KAU-CSSL, the first continuous Saudi Sign Language (SSL) dataset focused on complete sentences. They propose a transformer-based model that uses ResNet-18 for spatial feature extraction and a Transformer Encoder with a Bidirectional LSTM to model temporal dependencies. The model achieved 99.02% accuracy in signer-dependent mode and 77.71% in signer-independent mode, advancing communication tools for the SSL community.

Dhati+: Fine-tuned Large Language Models for Arabic Subjectivity Evaluation

arXiv · · NLP Arabic AI

This paper introduces AraDhati+, a new comprehensive dataset for Arabic subjectivity analysis created by combining existing datasets like ASTD, LABR, HARD, and SANAD. The researchers fine-tuned Arabic language models including XLM-RoBERTa, AraBERT, and ArabianGPT on AraDhati+ for subjectivity classification. An ensemble decision approach achieved 97.79% accuracy. Why it matters: The work addresses the under-resourced nature of Arabic NLP by providing a new dataset and demonstrating strong results in subjectivity classification, advancing sentiment analysis capabilities for the Arabic language.

SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs

arXiv · · NLP LLM

The Qatar Computing Research Institute (QCRI) has released SpokenNativQA, a multilingual spoken question-answering dataset for evaluating LLMs in conversational settings. The dataset contains 33,000 naturally spoken questions and answers across multiple languages, including low-resource and dialect-rich languages. It aims to address the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. Why it matters: This benchmark enables more robust evaluation of LLMs in speech-based interactions, particularly for Arabic dialects and other low-resource languages.

Palm: A Culturally Inclusive and Linguistically Diverse Dataset for Arabic LLMs

arXiv · · NLP LLM

A new culturally inclusive and linguistically diverse dataset called Palm for Arabic LLMs is introduced, covering 22 Arab countries and featuring instructions in both Modern Standard Arabic (MSA) and dialectal Arabic (DA) across 20 topics. The dataset was built through a year-long community-driven project involving 44 researchers from across the Arab world. Evaluation of frontier LLMs using the dataset reveals limitations in cultural and dialectal understanding, with some countries being better represented than others.

Commonsense Reasoning in Arab Culture

arXiv · · NLP Arabic AI

A new dataset called ArabCulture is introduced to address the lack of culturally relevant commonsense reasoning resources in Arabic AI. The dataset covers 13 countries across the Gulf, Levant, North Africa, and the Nile Valley, spanning 12 daily life domains with 54 fine-grained subtopics. It was built from scratch by native speakers writing and validating culturally relevant questions. Why it matters: The dataset highlights the need for more culturally aware models and benchmarks tailored to the Arabic-speaking world, moving beyond machine-translated resources.

MultiProSE: A Multi-label Arabic Dataset for Propaganda, Sentiment, and Emotion Detection

arXiv · · NLP Arabic AI

The paper introduces MultiProSE, the first multi-label Arabic dataset for propaganda, sentiment, and emotion detection. It extends the existing ArPro dataset with sentiment and emotion annotations, resulting in 8,000 annotated news articles. Baseline models, including GPT-4o-mini and BERT-based models, were developed for each task, and the dataset, guidelines, and code are publicly available. Why it matters: This resource enables further research into Arabic language models and a better understanding of opinion dynamics within Arabic news media.

NativQA: Multilingual Culturally-Aligned Natural Query for LLMs

arXiv · · NLP LLM

The paper introduces NativQA, a language-independent framework for constructing culturally and regionally aligned QA datasets in native languages. Using the framework, the authors created MultiNativQA, a multilingual natural QA dataset of ~64k manually annotated QA pairs in seven languages. The dataset covers queries from native speakers in 9 regions, spanning 18 topics, and is designed for evaluating and tuning LLMs. Why it matters: The framework and dataset enable the creation of more culturally relevant and effective LLMs for diverse linguistic communities, including those in the Middle East.

GemmAr: Enhancing LLMs Through Arabic Instruction-Tuning

arXiv · · NLP LLM

The paper introduces InstAr-500k, a new Arabic instruction dataset of 500,000 examples designed to improve LLM performance in Arabic. Researchers fine-tuned the open-source Gemma-7B model using InstAr-500k and evaluated it on downstream tasks, achieving strong results on Arabic NLP benchmarks. They then released GemmAr-7B-V1, a model specifically tuned for Arabic NLP tasks. Why it matters: This work addresses the lack of high-quality Arabic instruction data, potentially boosting the capabilities of Arabic language models.

101 Billion Arabic Words Dataset

arXiv · · NLP LLM

Researchers compiled a 101 Billion Arabic Words Dataset by mining text from Common Crawl WET files and rigorously cleaning and deduplicating the extracted content. The dataset aims to address the scarcity of original, high-quality Arabic linguistic data, which often leads to bias in Arabic LLMs that rely on translated English data. This is the largest Arabic dataset available to date. Why it matters: The new dataset can significantly contribute to the development of authentic Arabic LLMs that are more linguistically and culturally accurate.

ArabicaQA: A Comprehensive Dataset for Arabic Question Answering

arXiv · · NLP Research

Researchers introduce ArabicaQA, a large-scale dataset for Arabic question answering, comprising 89,095 answerable and 3,701 unanswerable questions. They also present AraDPR, a dense passage retrieval model trained on the Arabic Wikipedia. The paper includes benchmarking of large language models (LLMs) for Arabic question answering. Why it matters: This work addresses a significant gap in Arabic NLP resources and provides valuable tools and benchmarks for advancing research in the field.

Exploring Sound vs Vibration for Robust Fault Detection on Rotating Machinery

arXiv · · Research Robotics

The study introduces the Qatar University Dual-Machine Bearing Fault Benchmark dataset (QU-DMBF) containing sound and vibration data from two motors across 1080 conditions. It proposes a deep learning approach for sound-based fault detection, addressing limitations of vibration-based methods. Experiments on QU-DMBF show sound-based detection is more robust, independent of sensor location, and cost-effective while matching vibration-based performance. Why it matters: The new dataset and findings could shift the focus toward sound-based methods for more reliable and accessible predictive maintenance in industrial settings.

Race Against the Machine: a Fully-annotated, Open-design Dataset of Autonomous and Piloted High-speed Flight

arXiv · · Robotics RL

Researchers at the Technology Innovation Institute (TII) have released a fully-annotated dataset for autonomous drone racing, called "Race Against the Machine." The dataset includes high-resolution visual, inertial, and motion capture data from both autonomous and piloted flights, along with commands, control inputs, and corner-level labeling of drone racing gates. The specifications to recreate their flight platform using commercial off-the-shelf components and the Betaflight controller are also released. Why it matters: This comprehensive resource aims to support the development of new methods and establish quantitative comparisons for approaches in robotics and AI, democratizing drone racing research.

SlimPajama-DC: Understanding Data Combinations for LLM Training

arXiv · · LLM Research

Researchers at MBZUAI present SlimPajama-DC, an empirical analysis of data combinations for pretraining LLMs on the SlimPajama dataset. The study examines the impact of global vs. local deduplication and the proportions of highly-deduplicated multi-source datasets. Results show that increased data diversity after global deduplication is crucial, with the best configuration outperforming models trained on RedPajama.
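The global-versus-local distinction the study examines can be sketched in a few lines of Python. This is a minimal exact-hash illustration under assumed names (`fingerprint`, `sources`); production pipelines such as SlimPajama's use near-duplicate methods like MinHash rather than exact hashing:

```python
import hashlib

def fingerprint(doc: str) -> str:
    # Hash the whitespace-normalized, lowercased text.
    # Real pipelines use MinHash/LSH to catch near-duplicates as well.
    return hashlib.sha256(" ".join(doc.split()).lower().encode()).hexdigest()

def dedup_local(sources: dict) -> dict:
    """Local deduplication: duplicates are removed within each source only."""
    out = {}
    for name, docs in sources.items():
        seen, kept = set(), []
        for d in docs:
            h = fingerprint(d)
            if h not in seen:
                seen.add(h)
                kept.append(d)
        out[name] = kept
    return out

def dedup_global(sources: dict) -> dict:
    """Global deduplication: one hash set is shared across all sources."""
    seen, out = set(), {}
    for name, docs in sources.items():
        kept = []
        for d in docs:
            h = fingerprint(d)
            if h not in seen:
                seen.add(h)
                kept.append(d)
        out[name] = kept
    return out
```

A document repeated across two sources survives local deduplication in both but is kept only once under global deduplication, which is what changes the effective diversity of the mixture.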

QASR: QCRI Aljazeera Speech Resource -- A Large Scale Annotated Arabic Speech Corpus

arXiv · · NLP Arabic AI

The Qatar Computing Research Institute (QCRI) has released QASR, a 2,000-hour transcribed Arabic speech corpus collected from Aljazeera news broadcasts. The dataset features multi-dialect speech sampled at 16kHz, aligned with lightly supervised transcriptions and linguistically motivated segmentation. QCRI also released a 130M word dataset to improve language model training. Why it matters: QASR enables new research in Arabic speech recognition, dialect identification, punctuation restoration, and other NLP tasks for spoken data.

Studying the History of the Arabic Language: Language Technology and a Large-Scale Historical Corpus

arXiv · · NLP Arabic AI

This paper introduces a large-scale historical corpus of written Arabic spanning 1400 years. The corpus was cleaned and processed using Arabic NLP tools, including identification of reused text. The study uses a novel automatic periodization algorithm to study the history of the Arabic language, confirming the division into Modern Standard and Classical Arabic. Why it matters: This resource enables further computational research into the evolution of Arabic and the development of NLP tools for historical texts.

Detecting deepfakes in the presence of code-switching

MBZUAI · · NLP Arabic AI

MBZUAI researchers, in collaboration with Monash University, have introduced ArEnAV, a new dataset for deepfake detection featuring Arabic-English code-switching. The dataset comprises 765 hours of manipulated YouTube videos, incorporating intra-utterance code-switching and dialect variations. Experiments showed that code-switching significantly reduces the performance of existing deepfake detectors. Why it matters: This work addresses a critical gap in AI's ability to handle linguistic diversity, particularly in regions where code-switching is prevalent, enhancing the reliability of deepfake detection in real-world scenarios.

Web2Code: A new dataset to enhance multimodal LLM performance presented at NeurIPS

MBZUAI · · NLP LLM

MBZUAI researchers introduced Web2Code, a new dataset suite, at NeurIPS to enhance multimodal LLM performance in web page analysis and HTML generation. The suite includes a fine-tuning dataset and two benchmark datasets. Instruction tuning with Web2Code improved performance on specialized tasks without affecting general capabilities. Why it matters: This contribution addresses a key limitation in current multimodal LLMs, potentially boosting productivity in web design and development by providing targeted training data.

A new standard for evaluating Arabic language models presented at ACL

MBZUAI · · NLP LLM

MBZUAI researchers have created ArabicMMLU, the first benchmark dataset in Modern Standard Arabic for evaluating language understanding across multiple tasks. The dataset contains over 14,000 multiple-choice questions from school exams across the Arabic-speaking world and addresses the limitations of translated English datasets. It was presented at the 62nd Annual Meeting of the Association for Computational Linguistics in Bangkok. Why it matters: This benchmark enables a more accurate and culturally relevant evaluation of LLMs' capabilities in Arabic, which is crucial for developing AI tailored to the Arab world.

Testing the limits of vision language models: A new benchmark dataset presented at ACL

MBZUAI · · CV NLP

MBZUAI researchers presented EXAMS-V, a new benchmark dataset for evaluating the reasoning and processing abilities of vision language models (VLMs). EXAMS-V contains over 20,000 multiple-choice questions across 26 subjects and 11 languages, including Arabic. The dataset presents the questions within images, testing the VLM's ability to integrate visual and textual information. Why it matters: This dataset fills a gap in VLM evaluation, providing a valuable resource for assessing and improving the multimodal reasoning capabilities of these models, particularly in diverse languages like Arabic.

ADAB: Arabic Dataset for Automated Politeness Benchmarking -- A Large-Scale Resource for Computational Sociopragmatics

arXiv · · NLP Arabic AI

The paper introduces ADAB (Arabic Dataset for Automated Politeness Benchmarking), a new annotated Arabic dataset for politeness detection collected from online platforms. The dataset covers Modern Standard Arabic and multiple dialects (Gulf, Egyptian, Levantine, and Maghrebi). It contains 10,000 samples across 16 politeness categories and achieves substantial inter-annotator agreement (kappa = 0.703). Why it matters: This dataset addresses the under-explored area of Arabic-language resources for politeness detection, which is crucial for culturally-aware NLP systems.
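The agreement statistic reported above is Cohen's kappa, which discounts the agreement two annotators would reach by chance. A minimal stdlib sketch (the label names in the usage example are invented for illustration, not ADAB's category set):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each annotator's label
    frequencies. (Undefined when p_e == 1, i.e. both always give one label.)
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.703, as reported for ADAB, falls in the range conventionally read as "substantial" agreement (roughly 0.61 to 0.80 on the Landis-Koch scale).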

ArabJobs: A Multinational Corpus of Arabic Job Ads

arXiv · · NLP Arabic AI

The ArabJobs dataset is a new corpus of over 8,500 Arabic job advertisements collected from Egypt, Jordan, Saudi Arabia, and the UAE. The dataset contains over 550,000 words and captures linguistic, regional, and socio-economic variation in the Arab labor market. It is available on GitHub and can be used for fairness-aware Arabic NLP and labor market research.

A Tale of Two Scripts: Transliteration and Post-Correction for Judeo-Arabic

arXiv · · NLP Arabic AI

The paper introduces a two-step approach for transliterating Judeo-Arabic text (written in Hebrew script) into Arabic script. The method involves character-level mapping followed by post-correction to fix grammatical and orthographic errors. The authors also benchmark LLMs on the transliteration task and demonstrate that transliteration enables the use of Arabic NLP tools on Judeo-Arabic. Why it matters: This work makes Judeo-Arabic texts more accessible to Arabic NLP, enabling processing and analysis that was previously impossible.

Proper Noun Diacritization for Arabic Wikipedia: A Benchmark Dataset

arXiv · · NLP Arabic AI

A new dataset for Arabic proper noun diacritization was introduced, addressing the ambiguity caused by undiacritized proper nouns in Arabic Wikipedia. The dataset includes manually diacritized Arabic proper nouns of various origins along with their English Wikipedia glosses. GPT-4o was benchmarked on the task of recovering full diacritization from undiacritized Arabic and English forms, achieving 73% accuracy. Why it matters: The release of this dataset should facilitate further research on Arabic Wikipedia proper noun diacritization, improving the accessibility and accuracy of Arabic NLP resources.

Arabic Diacritics in the Wild: Exploiting Opportunities for Improved Diacritization

arXiv · · NLP Arabic AI

The paper addresses the challenge of missing diacritics in Arabic NLP by exploring naturally occurring diacritics in a new dataset across six genres. It maps partially diacritized words to their full diacritization and proposes extensions to the analyze-and-disambiguate approach. The extended diacritization algorithm achieves notable improvements, and the code/datasets are released as open source. Why it matters: This research provides valuable resources and methods for improving Arabic text processing, especially in contexts where diacritization is crucial for accurate interpretation.
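The mapping step above hinges on one check: is a partially diacritized word consistent with a candidate full diacritization? A hypothetical sketch of that sub-step (names and logic are illustrative, not the paper's released code):

```python
# Arabic diacritic code points: tanwin forms, fatha, damma, kasra,
# shadda, and sukun (U+064B through U+0652).
ARABIC_DIACRITICS = set("\u064B\u064C\u064D\u064E\u064F\u0650\u0651\u0652")

def segment(word: str):
    """Split a word into (base character, set of diacritics) units."""
    units = []
    for ch in word:
        if ch in ARABIC_DIACRITICS and units:
            units[-1][1].add(ch)
        else:
            units.append((ch, set()))
    return units

def is_consistent(partial: str, full: str) -> bool:
    """True if every diacritic in the partial form also appears on the
    same base letter in the fully diacritized candidate."""
    p, f = segment(partial), segment(full)
    return len(p) == len(f) and all(
        pc == fc and pd <= fd for (pc, pd), (fc, fd) in zip(p, f)
    )
```

A disambiguation system can use such a filter to discard analyzer candidates that contradict the diacritics found "in the wild," keeping only the full diacritizations the partial marks support.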

AraSpider: Democratizing Arabic-to-SQL

arXiv · · NLP Arabic AI

The study introduces AraSpider, the first Arabic version of the Spider dataset, to advance Arabic NLP. Four multilingual translation models and two text-to-SQL models (ChatGPT 3.5 and SQLCoder) were evaluated. Back translation significantly improved the performance of both ChatGPT 3.5 and SQLCoder on the AraSpider dataset. Why it matters: This work democratizes access to text-to-SQL resources for Arabic speakers and provides a methodology for translating datasets to other languages.

ALDi: Quantifying the Arabic Level of Dialectness of Text

arXiv · · NLP Arabic AI

The paper introduces the concept of Arabic Level of Dialectness (ALDi), a continuous variable representing the degree of dialectal Arabic in a sentence, arguing that Arabic exists on a spectrum between MSA and DA. They present the AOC-ALDi dataset, comprising 127,835 sentences manually labeled for dialectness level, derived from news articles and user comments. Experiments show a model trained on AOC-ALDi can identify dialectness levels across various corpora and genres. Why it matters: ALDi provides a more nuanced approach to analyzing Arabic text than binary dialect identification, enabling sociolinguistic analysis of stylistic choices.

TII-SSRC-23 Dataset: Typological Exploration of Diverse Traffic Patterns for Intrusion Detection

arXiv · · Research NLP

Researchers introduce TII-SSRC-23, a new network intrusion detection dataset designed to improve the diversity and representation of modern network traffic for machine learning models. The dataset includes a range of traffic types and subtypes to address the limitations of existing datasets. Feature importance analysis and baseline experiments for supervised and unsupervised intrusion detection are also provided.

Detecting Propaganda Techniques in Code-Switched Social Media Text

arXiv · · NLP Arabic AI

This paper introduces a new task: detecting propaganda techniques in code-switched text. The authors created and released a corpus of 1,030 English-Roman Urdu code-switched texts annotated with 20 propaganda techniques. Experiments show the importance of directly modeling multilinguality and using the right fine-tuning strategy for this task.

Masader Plus: A New Interface for Exploring +500 Arabic NLP Datasets

arXiv · · NLP Arabic AI

Researchers have developed Masader Plus, a web interface for browsing the Masader catalog of Arabic NLP datasets. The interface supports data exploration, filtering, and API access for examining datasets. User interactions with the website also feed back into improving the catalog itself. Why it matters: This interface lowers the barrier to entry for researchers seeking Arabic NLP datasets, facilitating more research in the field.

PDNS-Net: A Large Heterogeneous Graph Benchmark Dataset of Network Resolutions for Graph Learning

arXiv · · Research Dataset

The Qatar Computing Research Institute (QCRI) has introduced PDNS-Net, a large heterogeneous graph dataset for malicious domain classification, containing 447K nodes and 897K edges. It is significantly larger than existing heterogeneous graph datasets like IMDB and DBLP. Preliminary evaluations using graph neural networks indicate that further research is needed to improve model performance on large heterogeneous graphs. Why it matters: This dataset will enable researchers to develop and benchmark graph learning algorithms on a scale relevant to real-world cybersecurity applications, particularly for identifying and mitigating malicious online activity.

Masader: Metadata Sourcing for Arabic Text and Speech Data Resources

arXiv · · NLP Arabic AI

Researchers created Masader, the largest public catalog for Arabic NLP datasets, containing 200 datasets annotated with 25 attributes. They developed a metadata annotation strategy applicable to other languages. The paper highlights issues within current Arabic NLP datasets and suggests recommendations. Why it matters: This curated dataset directory helps lower the barrier to entry for Arabic NLP research and development.

Overview of the Arabic Sentiment Analysis 2021 Competition at KAUST

arXiv · · NLP Arabic AI

KAUST organized an Arabic Sentiment Analysis Challenge where participants developed ML models to classify tweets as positive, negative, or neutral. The competition used the ASAD dataset with 55K tweets for training, 20K for validation, and 20K for final evaluation. The full dataset of 100K labeled tweets has been released for public use.

ASAD: A Twitter-based Benchmark Arabic Sentiment Analysis Dataset

arXiv · · NLP Arabic AI

Researchers introduce ASAD, a new large-scale, high-quality Arabic Sentiment Analysis Dataset based on 95K tweets with positive, negative, and neutral labels. The dataset is launched with a competition sponsored by KAUST offering a total of USD 17,000 in prizes. Baseline models are implemented and results reported to provide a reference for competition participants.

AlexU-Word: A New Dataset for Isolated-Word Closed-Vocabulary Offline Arabic Handwriting Recognition

arXiv · · NLP Arabic AI

Researchers from Alexandria University introduce AlexU-Word, a new dataset for offline Arabic handwriting recognition. The dataset contains 25,114 samples of 109 unique Arabic words, covering all letter shapes, collected from 907 writers. The dataset is designed for closed-vocabulary word recognition and to support segmented letter recognition-based systems. Why it matters: This dataset can help advance Arabic handwriting recognition systems, addressing a need for high-quality Arabic datasets in NLP research.

Window-Based Descriptors for Arabic Handwritten Alphabet Recognition: A Comparative Study on a Novel Dataset

arXiv · · NLP CV

This paper introduces a novel dataset for Arabic handwritten isolated alphabet letters to serve as a benchmark for future research. The study presents a comparative evaluation of window-based descriptors for Arabic handwritten alphabet recognition, testing different descriptors with various classifiers. The experiments demonstrate that window-based descriptors perform well, especially when combined with a novel spatial pyramid partitioning scheme. Why it matters: The new dataset and analysis of descriptors will help advance Arabic OCR and handwritten text recognition systems.

Navigating NLP for Underrepresented Languages: Dataset Challenges, Efficient Techniques, and Evaluations

MBZUAI · · NLP Research

MBZUAI's Dr. Fajri Koto presented research on overcoming challenges in NLP for underrepresented languages. His work includes creating multilingual datasets for Indonesian languages by engaging native speakers and finding that direct composition yields better results than translation. He also discussed vocabulary adaptation and zero-shot learning to address computational resource limitations, and emphasized the importance of datasets with local context for evaluating LLMs. Why it matters: This research addresses critical gaps in NLP for low-resource languages, providing insights and techniques to improve performance and cultural relevance in multilingual AI models within the region and globally.

Faculty win EACL 2023 outstanding paper

MBZUAI · · NLP Research

MBZUAI faculty Alham Fikri Aji, Timothy Baldwin, and Fajri Koto won an Outstanding Paper Award at EACL 2023 for their paper "NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages." The paper introduces the first parallel resource for 10 Indonesian low-resource languages to boost performance in sentiment analysis and machine translation. The dataset is available on HuggingFace. Why it matters: This work highlights MBZUAI's commitment to advancing NLP research in low-resource languages, which can help preserve linguistic diversity and improve access to digital resources for speakers of underrepresented languages.

High-quality Neural Reconstruction in Real-world Scenes

MBZUAI · · CV Robotics

A researcher at the University of Oxford presented new findings on 3D neural reconstruction. The talk introduced a dataset of real-world video captures paired with precise ground-truth 3D models. A novel joint optimization method refines camera poses during reconstruction. Why it matters: High-quality 3D reconstruction has broad applicability in robotics and computer vision in the region.