Researchers at MBZUAI have released SlimPajama-DC, an empirical analysis of data combinations for pretraining LLMs using the SlimPajama dataset. The study examines the impact of global vs. local deduplication and of varying proportions of highly deduplicated multi-source data. Results show that increased data diversity after global deduplication is crucial, with the best configuration outperforming models trained on RedPajama.
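The global-vs-local distinction can be made concrete with a minimal Python sketch. Real pipelines use fuzzy matching such as MinHash/LSH rather than exact hashes, and the function names here (`dedup_local`, `dedup_global`) are illustrative, not from the paper:

```python
import hashlib

def doc_hash(text: str) -> str:
    # Exact hashing after light normalization; production systems
    # typically use MinHash/LSH for near-duplicate detection.
    return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

def dedup_local(sources: dict) -> dict:
    """Local deduplication: duplicates are removed within each source only."""
    out = {}
    for name, docs in sources.items():
        seen, kept = set(), []
        for d in docs:
            h = doc_hash(d)
            if h not in seen:
                seen.add(h)
                kept.append(d)
        out[name] = kept
    return out

def dedup_global(sources: dict) -> dict:
    """Global deduplication: one shared 'seen' set across all sources,
    so a document appearing in two sources is kept only once."""
    seen, out = set(), {}
    for name, docs in sources.items():
        kept = []
        for d in docs:
            h = doc_hash(d)
            if h not in seen:
                seen.add(h)
                kept.append(d)
        out[name] = kept
    return out
```

Global deduplication removes cross-source overlap, which is why it changes the effective mixture of sources and makes data-diversity decisions matter more.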
Researchers at MBZUAI have developed DynaMMo, a dynamic model merging method for efficient class-incremental learning on medical images. DynaMMo merges multiple networks from different training stages using lightweight learnable modules, reducing computational overhead. Evaluated on three datasets, DynaMMo achieved a roughly 10-fold reduction in GFLOPs compared to existing dynamic methods, at the cost of a 2.76 average drop in accuracy.
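The efficiency argument rests on replacing several forward passes with one merged network. The sketch below shows the simplest form of parameter merging, a weighted average of corresponding weights; DynaMMo's actual merging uses lightweight learnable modules rather than a plain average, so treat this as a generic illustration with a hypothetical `merge_models` helper:

```python
def merge_models(state_dicts: list, weights: list = None) -> dict:
    """Weighted average of corresponding parameters from several models.
    Scalars stand in for weight tensors to keep the sketch self-contained;
    this is NOT DynaMMo's learned-module merging, just the basic idea."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged
```

Running a single merged network instead of every stage-specific network is what yields the order-of-magnitude GFLOPs reduction.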
The paper introduces Duet, a hybrid neural relation understanding method for cardinality estimation. Duet addresses limitations of existing learned methods, such as high inference cost and poor scalability, by incorporating predicate information directly into an autoregressive model. Experiments demonstrate Duet's efficiency, accuracy, and scalability, with Duet on a CPU even outperforming methods that require a GPU.
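The autoregressive framing follows the chain rule of probability: the selectivity of a conjunction of predicates factorizes as sel(p1, ..., pk) = P(p1) * P(p2 | p1) * ... A minimal sketch, computing each conditional exactly from the data where a learned model like Duet would predict it (the function name is illustrative, not from the paper):

```python
def chain_rule_selectivity(rows: list, predicates: list) -> float:
    """Exact chain-rule selectivity over in-memory rows.
    Learned autoregressive estimators replace each conditional
    P(p_i | p_1..p_{i-1}) below with a model prediction."""
    sel = 1.0
    current = rows
    for pred in predicates:
        if not current:
            return 0.0
        matched = [r for r in current if pred(r)]
        sel *= len(matched) / len(current)  # conditional selectivity
        current = matched
    return sel
```

Multiplying the estimated selectivity by the table's row count gives the cardinality estimate a query optimizer consumes.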
The paper introduces a novel actor-critic framework called Distillation Policy Optimization that combines on-policy and off-policy data for reinforcement learning. It incorporates variance reduction mechanisms like a unified advantage estimator (UAE) and a residual baseline. The empirical results demonstrate improved sample efficiency for on-policy algorithms, bridging the gap with off-policy methods.
The paper introduces Sadeed, a decoder-only language model fine-tuned from the Kuwain 1.5B model for improved Arabic text diacritization. Sadeed is fine-tuned on high-quality diacritized datasets and achieves competitive results compared to larger proprietary models. The authors also introduce SadeedDiac-25, a new benchmark for fairer evaluation of Arabic diacritization across diverse text genres. Why it matters: This work advances Arabic NLP by providing both a competitive diacritization model and a more robust evaluation benchmark, facilitating further research and development in the field.
A new mini-batch strategy using aggregated relational data is proposed to fit the mixed membership stochastic blockmodel (MMSB) to large networks. The method uses nodal information and stochastic gradients over bipartite graphs for scalable inference. The approach was applied to a citation network with over two million nodes and 25 million edges, capturing explainable structure. Why it matters: This research enables more efficient community detection in massive networks, which is crucial for analyzing complex relationships in many domains. Note, however, that this article has no clear connection to the Middle East.
This paper introduces an enhanced Dense Passage Retrieval (DPR) framework tailored for Arabic text retrieval. The core innovation is an Attentive Relevance Scoring (ARS) mechanism that improves semantic relevance modeling between questions and passages, replacing standard interaction methods. The method integrates pre-trained Arabic language models and architectural refinements, achieving improved retrieval and ranking accuracy for Arabic question answering. Why it matters: This work addresses the underrepresentation of Arabic in NLP research by providing a novel approach and publicly available code to improve Arabic text retrieval, which can benefit various applications like Arabic search engines and question-answering systems.
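The contrast with standard DPR can be sketched compactly: vanilla DPR scores a pair with a single dot product between pooled question and passage vectors, whereas an attentive scheme lets question tokens attend over passage tokens before aggregating. The formulation below is a hypothetical illustration of that idea, not the paper's exact ARS definition:

```python
import math

def softmax(xs: list) -> list:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u: list, v: list) -> float:
    return sum(a * b for a, b in zip(u, v))

def attentive_relevance_score(q_vecs: list, p_vecs: list) -> float:
    """Each question token attends over passage tokens; the score averages
    the attention-weighted similarities. (Illustrative only; the paper's
    actual ARS mechanism may aggregate differently.)"""
    per_token = []
    for q in q_vecs:
        sims = [dot(q, p) for p in p_vecs]
        attn = softmax(sims)
        per_token.append(sum(a * s for a, s in zip(attn, sims)))
    return sum(per_token) / len(per_token)
```

Token-level interaction like this captures partial matches that a single pooled dot product can wash out, which is the intuition behind replacing the standard interaction method.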