GCC AI Research

Results for "LoRA"

SDXL Finetuned with LoRA for Coloring Therapy: Generating Graphic Templates Inspired by United Arab Emirates Culture

arXiv ·

This paper introduces a method that fine-tunes Stable Diffusion XL (SDXL) with LoRA to generate coloring templates based on Emirati Al-Sadu weaving patterns for use in mental health therapy. The approach aims to combine coloring therapy's stress-relieving benefits with culturally familiar motifs, potentially aiding in the treatment of Generalized Anxiety Disorder (GAD). Future research will use biosignals to assess how Emirati heritage art affects engagement and therapeutic effectiveness for Emirati participants.
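
For readers unfamiliar with the inference side of this setup, the sketch below shows how a LoRA adapter is typically applied to SDXL with Hugging Face diffusers. The adapter repository name and the prompt are illustrative assumptions; the paper's trained weights are not public.

```python
# Hedged sketch: applying a LoRA adapter to SDXL at inference time.
# The adapter repo id and prompt are hypothetical, not the paper's.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach LoRA weights trained on Al-Sadu motifs (hypothetical repo id).
pipe.load_lora_weights("your-org/sdxl-alsadu-lora")

image = pipe(
    prompt="black-and-white line-art coloring template, Al-Sadu weaving pattern",
    num_inference_steps=30,
).images[0]
image.save("alsadu_template.png")
```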

Saudi-Dialect-ALLaM: LoRA Fine-Tuning for Dialectal Arabic Generation

arXiv ·

This paper introduces Saudi-Dialect-ALLaM, a LoRA fine-tune of the Saudi foundation model ALLaM-7B-Instruct-preview aimed at improving generation in the Najdi and Hijazi Saudi dialects. The model is trained on a private dataset of 5,466 synthetic instruction-response pairs, with two variants explored: Dialect-Token and No-Token training. Results indicate that the Dialect-Token variant achieves superior dialect control and fidelity compared to generic instruction-tuned models, although the dataset and model weights are not released.
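
A minimal sketch of what Dialect-Token training could look like with Hugging Face peft follows. The LoRA hyperparameters, the HF org prefix, and the [NAJDI]/[HIJAZI] tag strings are assumptions; the paper's exact configuration is not released.

```python
# Sketch of the Dialect-Token setup: prepend a dialect control tag so one
# LoRA adapter can steer generation toward Najdi or Hijazi.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "ALLaM-AI/ALLaM-7B-Instruct-preview"  # org prefix assumed
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Register dialect tags and grow the embedding table to match.
tokenizer.add_special_tokens({"additional_special_tokens": ["[NAJDI]", "[HIJAZI]"]})
model.resize_token_embeddings(len(tokenizer))

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Each training instruction would then be tagged with its target dialect,
# e.g. "[NAJDI] <instruction text>", before tokenization.
```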

Bactrian-X: Multilingual Replicable Instruction-Following Models with Low-Rank Adaptation

arXiv ·

MBZUAI releases Bactrian-X, a multilingual parallel dataset of 3.4 million instruction-response pairs across 52 languages. Using this dataset, they trained low-rank adaptation (LoRA) adapters: lightweight, swappable components that attach to a frozen base large language model. Experiments show the LoRA-based models outperform both vanilla base models and existing instruction-tuned models in multilingual settings.
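
The "swappable component" idea is just adapter loading: a small LoRA checkpoint attached to a frozen base model at load time. A minimal sketch with peft follows; the base and adapter repository ids below follow MBZUAI's naming but are assumptions and should be verified against the actual releases.

```python
# Minimal sketch: attach a Bactrian-X LoRA adapter to a frozen base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"                 # base weights (assumed)
adapter_id = "MBZUAI/bactrian-x-llama-7b-lora"  # verify exact repo name

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Only the small LoRA matrices are downloaded; swapping the adapter
# switches capability without touching the base weights.
model = PeftModel.from_pretrained(base, adapter_id)
```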

Resource-Aware Arabic LLM Creation: Model Adaptation, Integration, and Multi-Domain Testing

arXiv ·

Researchers fine-tuned the Qwen2-1.5B model for Arabic with QLoRA on a system with only 4GB of VRAM, drawing on datasets such as Bactrian and Arabic Wikipedia. They addressed core challenges in Arabic NLP, including rich morphology and dialectal variation. After 10,000 training steps, the final loss converged to 0.1083, with improved handling of Arabic-specific linguistic phenomena. Why it matters: This demonstrates a resource-efficient recipe for creating specialized Arabic language models, democratizing access to advanced NLP technologies.
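
A hedged sketch of the QLoRA recipe described here: the frozen base model is loaded in 4-bit NF4 quantization via bitsandbytes while small LoRA matrices remain trainable in higher precision, which is what makes a 4GB VRAM budget feasible. The ranks and target modules below are illustrative, not the paper's values.

```python
# QLoRA sketch: 4-bit quantized base + trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",        # NormalFloat4, the QLoRA default
    bnb_4bit_use_double_quant=True,   # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B", quantization_config=bnb
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights train
```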