This paper introduces Saudi-Dialect-ALLaM, a LoRA fine-tuned version of the Saudi Arabian foundation model ALLaM-7B-Instruct-preview, designed to improve generation of text in Saudi dialects (Najdi and Hijazi). The model is trained on a private dataset of 5,466 synthetic instruction-response pairs, with two training variants explored: Dialect-Token and No-Token. Results indicate that the Dialect-Token model achieves superior dialect control and fidelity compared to generic instruction models, although the dataset and model weights are not released.
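Since the dataset and exact template are not released, the following is only a minimal sketch of what Dialect-Token-style conditioning might look like: a control token prepended to each instruction-response pair so the model learns to associate it with the target dialect. The token names and template here are hypothetical, not the paper's.

```python
# Hypothetical dialect control tokens; the paper's actual tokens and
# prompt template are not public.
DIALECT_TOKENS = {"najdi": "[NAJDI]", "hijazi": "[HIJAZI]"}

def format_example(instruction: str, response: str, dialect: str) -> str:
    """Prepend a dialect control token so the model can condition on it.

    At inference time, the same token would be placed in the prompt to
    steer generation toward the requested dialect.
    """
    token = DIALECT_TOKENS[dialect]
    return f"{token} Instruction: {instruction}\nResponse: {response}"

pair = format_example("Greet a friend.", "هلا والله!", "najdi")
```

The No-Token variant would simply omit the control token, leaving dialect identity implicit in the training data.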
This paper introduces Absher, a new benchmark for evaluating LLMs' linguistic and cultural competence in Saudi dialects. The benchmark comprises over 18,000 multiple-choice questions spanning six categories, using dialectal words, phrases, and proverbs from various regions of Saudi Arabia. Evaluation of state-of-the-art LLMs reveals performance gaps, especially in cultural inference and contextual understanding, highlighting the need for dialect-aware training.
KAUST researchers have developed a parameter-efficient learning approach to identify Arabic dialects using limited data and computing power, fine-tuning the Whisper model on a dataset covering 17 dialects. The model achieves high accuracy using only 2.5% of the parameters of the full model and 30% of the training data. Srijith Radhakrishnan presented the findings at EMNLP 2023 and Interspeech 2023. Why it matters: This research addresses the challenge of dialect identification in Arabic NLP and enables more efficient use of large language models in resource-constrained environments.
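The parameter savings come from low-rank adapter methods such as LoRA: instead of updating a full weight matrix, two small low-rank factors are trained. The arithmetic below is a generic back-of-envelope illustration, not the paper's actual adapter configuration; the hidden size 1280 corresponds to Whisper-large, and the rank is an arbitrary example.

```python
# Generic illustration of low-rank adapter parameter counts (not the
# paper's configuration).
def lora_param_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Fraction of a d_in x d_out weight's parameters that a rank-`rank`
    adapter adds: two factors of shape (d_in, rank) and (rank, d_out)."""
    full = d_in * d_out
    adapter = rank * (d_in + d_out)
    return adapter / full

# Example: a 1280x1280 projection (Whisper-large hidden size) with rank 16
frac = lora_param_fraction(1280, 1280, 16)
print(f"{frac:.1%}")  # -> 2.5%
```

The actual trainable fraction in the paper depends on which layers get adapters and what ranks are used; this example only shows why the fraction ends up in the low single-digit percent range.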