GCC AI Research

Understanding the Mixture of Experts Layer in Deep Learning

MBZUAI ·

A Mixture of Experts (MoE) layer is a sparsely activated deep learning layer: a router network directs each token to one of several expert subnetworks, so only a fraction of the layer's parameters is active for any given token. Yuanzhi Li, an assistant professor at CMU and affiliated faculty at MBZUAI, researches deep learning theory and NLP. Why it matters: This highlights MBZUAI's engagement with cutting-edge deep learning research, specifically in efficient model design.
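To make the routing idea concrete, here is a minimal sketch of a top-1-routed (switch-style) MoE layer in PyTorch. The expert count, hidden sizes, and the top-1 routing rule are illustrative assumptions, not the design of any particular model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Minimal sparsely activated MoE layer with top-1 routing (a sketch;
    sizes and expert count are illustrative assumptions)."""
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # scores each token against each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        tokens = x.reshape(-1, x.size(-1))              # flatten to (num_tokens, d_model)
        probs = F.softmax(self.router(tokens), dim=-1)  # routing distribution per token
        gate, idx = probs.max(dim=-1)                   # top-1: one expert per token
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = idx == e                             # tokens routed to expert e
            if mask.any():
                out[mask] = gate[mask].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)

layer = MoELayer()
y = layer(torch.randn(2, 16, 512))  # each token activates only one of the 8 experts
```

Because only the selected expert runs per token, parameter count can grow with the number of experts while per-token compute stays roughly constant.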

DERC’s Dr. Meixia Geng and Dr. Felix Vega to Present Research Papers at ILP 2023

TII ·

Researchers from the Directed Energy Research Center (DERC) will present research papers at the 17th Workshop of the International Lithosphere Program Task Force on Sedimentary Basins in Abu Dhabi. Dr. Meixia Geng's study identifies potential geothermal exploration sites in the UAE based on Curie isotherm depths. Dr. Felix Vega's research demonstrates drone-borne synthetic aperture radar (SAR) for subsurface mapping of underground cavities. Why it matters: These studies showcase the UAE's commitment to sustainable development through geothermal energy exploration and advanced subsurface imaging techniques.

LLM-DetectAIve: a Tool for Fine-Grained Machine-Generated Text Detection

arXiv ·

MBZUAI researchers release LLM-DetectAIve, a tool for fine-grained detection of machine-generated text across four categories: human-written, machine-generated, machine-written then humanized, and human-written then machine-polished. The tool aims to address concerns about misuse of LLMs, especially in education and academia, by identifying attempts to disguise machine authorship through humanizing or polishing content. LLM-DetectAIve is publicly accessible, with code and a demonstration video provided.
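As a rough sketch of how such fine-grained detection can be exposed programmatically, the snippet below wraps a four-class sequence classifier in the Hugging Face pipeline API. The checkpoint path is a placeholder, not LLM-DetectAIve's actual released model, and the label strings merely mirror the four categories above.

```python
from transformers import pipeline

# The four fine-grained categories described above.
LABELS = [
    "human-written",
    "machine-generated",
    "machine-written, then humanized",
    "human-written, then machine-polished",
]

# Placeholder checkpoint: any sequence-classification model fine-tuned on
# these four labels would slot in here; this is not the tool's official model.
detector = pipeline("text-classification", model="path/to/four-class-detector")

result = detector("The proposed method yields a statistically significant improvement.")
print(result)  # e.g. [{'label': 'machine-generated', 'score': 0.93}]
```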

Nile-Chat: Egyptian Language Models for Arabic and Latin Scripts

arXiv ·

The authors introduce Nile-Chat, a collection of LLMs (4B, 3x4B-A6B, and 12B) built specifically for the Egyptian dialect and capable of understanding and generating text in both Arabic and Latin scripts. A novel language adaptation approach based on the Branch-Train-MiX strategy merges script-specialized experts into a single MoE model. Nile-Chat models outperform multilingual and Arabic LLMs such as LLaMA, Jais, and ALLaM on newly introduced Egyptian benchmarks, with the 12B model achieving a 14.4% performance gain over Qwen2.5-14B-Instruct on Latin-script benchmarks; all resources are publicly available. Why it matters: This work addresses the overlooked problem of adapting LLMs to dual-script languages, providing a methodology for creating more inclusive and representative language models in the Arabic-speaking world.
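At a high level, Branch-Train-MiX continues pre-training branched copies of a seed model on different data (here, Arabic-script and Latin-script text), then combines the branches' feed-forward sublayers as the experts of one MoE model under a freshly initialized router. The schematic below uses a soft (weighted) mixture for brevity; the routing details, module names, and shapes are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BTXMoE(nn.Module):
    """Schematic Branch-Train-MiX merge: feed-forward sublayers taken from
    independently trained branches become the experts of one MoE sublayer,
    tied together by a freshly initialized router (shapes are illustrative)."""
    def __init__(self, branch_ffns, d_model):
        super().__init__()
        self.experts = nn.ModuleList(branch_ffns)            # trained branch FFNs
        self.router = nn.Linear(d_model, len(branch_ffns))   # learned in MoE finetuning

    def forward(self, x):  # x: (batch, seq, d_model)
        weights = F.softmax(self.router(x), dim=-1)          # (batch, seq, num_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (..., d_model, E)
        return (outs * weights.unsqueeze(-2)).sum(-1)        # weighted mix of experts

# Hypothetical usage: one FFN trained on Arabic-script data, one on Latin-script.
d_model = 512
arabic_ffn = nn.Sequential(nn.Linear(d_model, 2048), nn.GELU(), nn.Linear(2048, d_model))
latin_ffn  = nn.Sequential(nn.Linear(d_model, 2048), nn.GELU(), nn.Linear(2048, d_model))
moe_sublayer = BTXMoE([arabic_ffn, latin_ffn], d_model)
y = moe_sublayer(torch.randn(2, 16, d_model))
```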

Recent Advances in Deep Reinforcement Learning

MBZUAI ·

Keith Ross, Dean of Computer Science, Data Science and Engineering at NYU Shanghai, will give a talk on recent advances in Deep Reinforcement Learning (DRL). The talk will review DRL breakthroughs and discuss algorithmic research on DRL for high-dimensional state and action spaces, with applications to robotic locomotion. Ross's research interests include deep reinforcement learning, Internet privacy, peer-to-peer networking, and computer network modeling. Why it matters: Reinforcement learning is a core area of AI research in the GCC region, and a talk by a prominent researcher can help inform and inspire local researchers.