GCC AI Research


Results for "NLP models"

Modeling Text as a Living Object

MBZUAI ·

The InterText project, funded by the European Research Council, aims to advance NLP by developing a framework for modeling fine-grained relationships between texts. This approach enables tracing the origin and evolution of texts and ideas. Iryna Gurevych from the Technical University of Darmstadt presented the intertextual approach to NLP, covering data modeling, representation learning, and practical applications. Why it matters: This research could enable a new generation of AI applications for text work and critical reading, with potential applications in collaborative knowledge construction and document revision assistance.

NLP meets Psychotherapy: from Estimating Depression Severity to Estimating the Client’s Well-Being

MBZUAI ·

The talk presents two projects on using NLP to estimate a client's depression severity and well-being. The first examines emotional coherence between the subjective experience of emotions and their expression in therapy, using transformer-based emotion recognition models. The second proposes a semantic pipeline for estimating depression severity from individuals' social media posts, exploring different methods of aggregating per-post predictions to select one of the four Beck Depression Inventory (BDI) response options for each symptom. Why it matters: This research explores how NLP techniques can be applied to mental health assessment, potentially offering new tools for diagnosis and treatment monitoring.
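
As a rough sketch of the aggregation step, per-post predictions over the four BDI options can be pooled into a single choice per symptom. The probabilities below are made up, and mean/max pooling stand in for the specific aggregation methods the talk compares:

```python
import numpy as np

# Probabilities over the four BDI options (0-3) for one symptom,
# one row per social-media post by the same user (illustrative values).
post_probs = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.10, 0.60, 0.20, 0.10],
    [0.15, 0.55, 0.20, 0.10],
])

mean_pooled = post_probs.mean(axis=0)   # average evidence across posts
max_pooled = post_probs.max(axis=0)     # alternative: most confident post wins

print("mean-pool choice:", int(mean_pooled.argmax()))  # option 1
print("max-pool choice:", int(max_pooled.argmax()))
```

Note that the two pooling rules can disagree: here max pooling favours the single highly confident first post, while mean pooling favours the option most posts lean towards.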

Parameter-Efficient Fine-Tuning for NLP Models

MBZUAI ·

The article discusses parameter-efficient fine-tuning methods for large NLP models, highlighting their importance due to the increasing size and computational demands of state-of-the-art language models. It provides an overview of these methods, presenting them in a unified view to emphasize their similarities and differences. Indraneil, a PhD candidate at TU Darmstadt's UKP Lab, is researching parameter-efficient fine-tuning, sparsity, and conditional computation methods to improve LLM performance in multilingual, multi-task settings. Why it matters: Efficient fine-tuning techniques are crucial for democratizing access to and accelerating the deployment of large language models in the region and beyond.
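
As a concrete illustration of one family of such methods, here is a minimal NumPy sketch of a low-rank update in the style of LoRA. This is our simplification, not code from the article, which presents a broader unified view; the dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 8                             # hidden size and low-rank bottleneck
W = rng.standard_normal((d, d))           # pretrained weight, kept frozen
A = rng.standard_normal((d, r)) * 0.01    # trainable down-projection
B = np.zeros((r, d))                      # trainable up-projection, zero-initialised

def forward(x):
    # Base output plus the low-rank correction; only A and B are trained.
    return x @ W + x @ A @ B

x = rng.standard_normal((1, d))
assert np.allclose(forward(x), x @ W)     # zero-init: no change before training

trainable, total = A.size + B.size, W.size + A.size + B.size
print(f"trainable parameters: {trainable} of {total}")
```

Only about 2% of the parameters are updated here, which is what makes such methods attractive as models grow.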

NYUAD and MBZUAI co-host EMNLP

MBZUAI ·

NYUAD and MBZUAI co-hosted the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Abu Dhabi from December 7-11. EMNLP is a top-tier NLP and AI conference organized by SIGDAT, the ACL Special Interest Group on Linguistic Data and Corpus-based Approaches to NLP. MBZUAI's Natural Language Processing Department is actively developing NLP datasets and methods to address societal problems. Why it matters: Hosting EMNLP in the UAE highlights the growing importance of NLP research in the region and the increasing contributions of local institutions like MBZUAI to the field.

A Panoramic Survey of Natural Language Processing in the Arab World

arXiv ·

This survey paper reviews the landscape of Natural Language Processing (NLP) research and applications in the Arab world. It discusses the unique challenges posed by the Arabic language, such as its morphological complexity and dialectal diversity. The paper also presents a historical overview of Arabic NLP and surveys various research areas, including machine translation, sentiment analysis, and speech recognition. Why it matters: The survey provides a comprehensive resource for researchers and practitioners interested in the current state and future directions of Arabic NLP, a field critical for enabling AI technologies to serve Arabic-speaking communities.

Transformer Models: from Linguistic Probing to Outlier Weights

MBZUAI ·

Giovanni Puccetti from ISTI-CNR presented research on linguistic probing of language models such as BERT and RoBERTa, investigating how well these models encode linguistic properties and linking that ability to outlier weights in their parameters. He also presented preliminary work on fine-tuning LLMs for Italian and on detecting synthetically generated news. Why it matters: Understanding the inner workings and linguistic capabilities of LLMs is crucial for improving their reliability and adapting them to diverse languages like Arabic.
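
The probing methodology itself can be sketched in a few lines: train a small linear classifier on frozen representations and read its accuracy as evidence that a property is encoded. The snippet below uses synthetic vectors in place of BERT/RoBERTa hidden states, and is an illustration of the general technique rather than the presented code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 32
H = rng.standard_normal((n, d))        # stand-in for frozen hidden states
y = (H[:, 3] > 0).astype(int)          # a "property" encoded in one dimension

w, b = np.zeros(d), 0.0                # linear probe, fit by logistic regression
for _ in range(500):
    p = 1 / (1 + np.exp(-(H @ w + b)))
    g = p - y                          # gradient of the log-loss
    w -= 0.1 * H.T @ g / n
    b -= 0.1 * g.mean()

acc = ((H @ w + b > 0).astype(int) == y).mean()
print(f"probe accuracy: {acc:.2f}")    # high accuracy => property is linearly decodable
```

In real probing studies the representations come from a specific model layer, so comparing probe accuracy across layers or parameter subsets (e.g. with outlier weights zeroed) shows where the property lives.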

Detect – Verify – Communicate: Combating Misinformation with More Realistic NLP

MBZUAI ·

Iryna Gurevych from TU Darmstadt discussed challenges in using NLP for misinformation detection, highlighting the gap between current fact-checking research and real-world scenarios. Her team is working on detecting emerging misinformation topics and has constructed two corpora for fact checking using larger evidence documents. They are also collaborating with cognitive scientists to detect and respond to vaccine hesitancy using effective communication strategies. Why it matters: Addressing misinformation is crucial in the Middle East, especially regarding public health and socio-political issues, making advancements in NLP-based fact-checking highly relevant.

Performance Prediction via Bayesian Matrix Factorisation for Multilingual Natural Language Processing Tasks

MBZUAI ·

A new Bayesian matrix factorisation approach is explored for performance prediction in multilingual NLP, aiming to reduce the experimental burden of evaluating many language combinations. The approach outperforms state-of-the-art prediction methods on NLP benchmarks such as machine translation and cross-lingual entity linking, requires no hyperparameter tuning, and provides uncertainty estimates over its predictions. Why it matters: Accurate performance prediction accelerates multilingual NLP research by reducing computational costs and improving experimental efficiency, which is especially valuable for Arabic NLP tasks.
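
The core idea can be sketched with a plain (non-Bayesian) matrix factorisation, substituted here for simplicity: treat the model-by-language score table as a partially observed matrix and predict unevaluated cells from its low-rank structure. All sizes and scores below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_langs, rank = 6, 8, 2
U_true = rng.standard_normal((n_models, rank))
V_true = rng.standard_normal((n_langs, rank))
scores = U_true @ V_true.T                   # ground-truth score matrix
mask = rng.random(scores.shape) < 0.7        # ~70% of cells actually evaluated

U = rng.standard_normal((n_models, rank)) * 0.1
V = rng.standard_normal((n_langs, rank)) * 0.1
for _ in range(2000):                        # gradient descent on observed cells only
    err = (U @ V.T - scores) * mask
    U -= 0.05 * err @ V
    V -= 0.05 * err.T @ U

pred_err = np.abs(U @ V.T - scores)[~mask].mean()
print(f"mean error on unevaluated cells: {pred_err:.3f}")
```

The Bayesian treatment in the paper goes further by placing priors on the factors, which is what yields calibrated uncertainty over the predicted scores rather than point estimates.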