GCC AI Research

Results for "SkinGPT-4"

Taqyim: Evaluating Arabic NLP Tasks Using ChatGPT Models

arXiv ·

This paper evaluates the performance of GPT-3.5 and GPT-4 on seven Arabic NLP tasks, including sentiment analysis, translation, and diacritization. GPT-4 outperforms GPT-3.5 on most tasks. The study takes a closer look at the sentiment analysis task and introduces Taqyim, a Python interface for evaluating Arabic NLP tasks. Why it matters: Evaluating LLMs on Arabic NLP tasks helps identify their strengths and weaknesses, guiding future research and development in the field.
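
The Taqyim interface itself is not reproduced here, but the kind of evaluation it wraps looks roughly like the sketch below: prompt a ChatGPT model with an Arabic example, parse a one-word label, and score against the gold annotation. The prompt wording, the toy dataset, and the helper names are illustrative assumptions, not the library's actual API.

```python
# Minimal sketch of a zero-shot Arabic sentiment evaluation loop of the kind
# Taqyim wraps; prompt wording, dataset, and helper names are assumptions,
# not the library's actual API.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical labelled examples (Arabic text, gold label).
dataset = [
    ("الخدمة ممتازة والتوصيل سريع", "positive"),
    ("المنتج سيئ ولا أنصح به", "negative"),
]

def classify(text: str, model: str = "gpt-4") -> str:
    """Ask the chat model for a one-word sentiment label."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the Arabic text as 'positive' or 'negative'. Reply with one word."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

correct = sum(classify(text) == gold for text, gold in dataset)
print(f"accuracy: {correct / len(dataset):.2f}")
```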

Language and Planning in Robotic Navigation: A Multilingual Evaluation of State-of-the-Art Models

arXiv ·

This paper introduces Arabic language integration into Vision-and-Language Navigation (VLN) in robotics, evaluating multilingual small language models (SLMs) such as GPT-4o mini, Llama 3 8B, Phi-3 14B, and Jais using the NavGPT framework. The study uses the R2R dataset to assess the impact of language on navigation reasoning through zero-shot sequential action prediction. Results show the framework enables high-level planning in both English and Arabic, though some models struggle with Arabic due to reasoning limitations and parsing issues. Why it matters: This work highlights the need to improve language model planning and reasoning for effective navigation, especially to unlock the potential of Arabic-language models in real-world applications.
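
As a rough illustration of what zero-shot sequential action prediction involves, the sketch below builds a navigation prompt from the instruction, history, and candidate viewpoints, queries an LLM, and parses the chosen action; the parsing step is where the Arabic-output failures mentioned above would surface. Function names and prompt format are assumptions, not the NavGPT framework's actual interface.

```python
# One step of a NavGPT-style zero-shot action-prediction loop: build a prompt,
# ask the LLM, parse the chosen action. All names here are illustrative.
import re
from typing import Callable, Sequence

def predict_next_action(llm: Callable[[str], str],
                        instruction: str,
                        candidates: Sequence[str],
                        history: Sequence[str]) -> int:
    """Return the index of the candidate viewpoint chosen by the LLM."""
    options = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Instruction: {instruction}\n"
        f"History: {' -> '.join(history) or 'none'}\n"
        f"Candidate viewpoints:\n{options}\n"
        "Reply with 'Action: <index>' only."
    )
    reply = llm(prompt)
    match = re.search(r"Action:\s*(\d+)", reply)
    if match is None:  # parsing failures (noted for some Arabic outputs) land here
        raise ValueError(f"could not parse an action from: {reply!r}")
    return int(match.group(1))

# Toy stand-in for a real model call, so the sketch runs as-is.
fake_llm = lambda prompt: "Action: 1"
print(predict_next_action(fake_llm, "اذهب إلى المطبخ", ["hallway", "kitchen door"], []))
```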

XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models

arXiv ·

MBZUAI researchers introduce XrayGPT, a conversational medical vision-language model for analyzing chest radiographs and answering open-ended questions. The model aligns a medical visual encoder (MedClip) with a fine-tuned large language model (Vicuna) using a linear transformation. To enhance performance, the LLM was fine-tuned using 217k interactive summaries generated from radiology reports.
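
A minimal sketch of the alignment idea, assuming a frozen visual encoder and a frozen LLM: a single trainable linear layer projects MedClip-style patch features into the LLM's embedding space so they can be prepended to the text prompt. Dimensions and module names are illustrative, not XrayGPT's released code.

```python
# Linear alignment of frozen visual features to the LLM embedding space;
# sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn

class VisualAligner(nn.Module):
    def __init__(self, vision_dim: int = 512, llm_dim: int = 4096):
        super().__init__()
        # The only trainable piece in this stage: vision features -> LLM space.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim) from the frozen encoder
        return self.proj(vision_feats)  # (batch, num_patches, llm_dim)

aligner = VisualAligner()
fake_feats = torch.randn(1, 49, 512)   # stand-in for MedClip patch features
image_tokens = aligner(fake_feats)     # ready to concatenate with text embeddings
print(image_tokens.shape)              # torch.Size([1, 49, 4096])
```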

Integrating Micro-Emotion Recognition with Mental Health Estimation for Improved Well-being

MBZUAI ·

This research introduces a novel method using the Lateral Accretive Hybrid Network (LEARNet) to capture and analyze micro-expressions for mental health applications. The method refines both broad and subtle facial cues to detect mental health conditions like anxiety or depression. The authors also propose a neural architecture search (NAS) strategy to design a compact CNN for micro-expression recognition, improving performance and resource use. Why it matters: By integrating micro-emotion recognition with mental health estimation, the approach enables more accurate and early detection of emotional and mental health issues, potentially leading to improved well-being.
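
For a sense of scale, the sketch below shows the kind of compact CNN a NAS procedure might converge to for micro-expression recognition; the layer sizes, input format, and class count are illustrative assumptions rather than the searched architecture from the paper.

```python
# Illustrative compact CNN for micro-expression classification; layer sizes,
# input format, and class count are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class CompactMicroExprNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale face crops
        return self.classifier(self.features(x).flatten(1))

model = CompactMicroExprNet()
print(model(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 5])
```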

SDXL Finetuned with LoRA for Coloring Therapy: Generating Graphic Templates Inspired by United Arab Emirates Culture

arXiv ·

This paper introduces a method using Stable Diffusion XL (SDXL) fine-tuned with LoRA to generate culturally relevant coloring templates based on Emirati Al-Sadu weaving patterns for mental health therapy. The approach aims to leverage coloring therapy's stress-relieving benefits while embedding cultural resonance, potentially aiding in the treatment of Generalized Anxiety Disorder (GAD). Future research will explore the impact of Emirati heritage art on Emirati individuals using biosignals to assess engagement and effectiveness.
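
At inference time, the pipeline described above can be approximated with Hugging Face diffusers, as in this sketch: load the SDXL base checkpoint, attach a LoRA adapter assumed to have been fine-tuned on Al-Sadu imagery, and prompt for a line-art coloring page. The LoRA weights path and prompt wording are placeholders, not the paper's released artifacts, and a CUDA GPU is assumed.

```python
# Inference-time sketch: SDXL + a hypothetical Al-Sadu LoRA adapter.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA checkpoint fine-tuned on Al-Sadu weaving imagery.
pipe.load_lora_weights("path/to/al_sadu_lora")

image = pipe(
    prompt="black-and-white line art coloring page, traditional Al-Sadu weaving pattern",
    negative_prompt="color, shading, photo",
    num_inference_steps=30,
).images[0]
image.save("al_sadu_coloring_template.png")
```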

ArabianGPT: Native Arabic GPT-based Large Language Model

arXiv ·

The paper introduces ArabianGPT, a suite of transformer-based language models designed specifically for Arabic, including versions with 0.1B and 0.3B parameters. A key component is the AraNizer tokenizer, tailored for Arabic script's morphology. Fine-tuning ArabianGPT-0.1B achieved 95% accuracy in sentiment analysis, up from 56% in the base model, and improved F1 scores in summarization. Why it matters: The models address the gap in native Arabic LLMs, offering better performance on Arabic NLP tasks through tailored architecture and tokenization.
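
A hedged sketch of the sentiment fine-tuning step using Hugging Face transformers is shown below; the model id, dataset, and hyperparameters are placeholders rather than the ArabianGPT release details, and whatever tokenizer ships with the checkpoint stands in for AraNizer.

```python
# Placeholder fine-tuning setup for sentiment classification with a GPT-style
# Arabic base model; model id, dataset, and hyperparameters are assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_id = "your-org/arabic-gpt-0.1b"   # hypothetical hub id for the base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# GPT-style tokenizers usually lack a pad token; reuse EOS for padding.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

# An Arabic sentiment dataset assumed to be available on the Hub
# with "text" and "label" columns.
dataset = load_dataset("ajgt_twitter_ar")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-ft", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
)
trainer.train()
```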

AraGPT2: Pre-Trained Transformer for Arabic Language Generation

arXiv ·

The paper introduces AraGPT2, a suite of pre-trained transformer models for Arabic language generation, with the largest model (AraGPT2-mega) containing 1.46 billion parameters. Trained on a large Arabic corpus of internet text and news, AraGPT2-mega demonstrates strong performance in synthetic news generation and zero-shot question answering. To address the risk of misuse, the authors also released a discriminator model with 98% accuracy in detecting AI-generated text. Why it matters: This release of both the model and discriminator fills a critical gap in Arabic NLP and encourages further research and applications in the field.
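
Since the AraGPT2 checkpoints are publicly released, trying the model is a few lines with transformers; the hub id and prompt below are assumptions based on the public release, with the smaller base model standing in for AraGPT2-mega.

```python
# Generation sketch with an AraGPT2 checkpoint; hub id and prompt are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="aubmindlab/aragpt2-base")
prompt = "أعلنت وزارة التعليم اليوم"  # "The Ministry of Education announced today"
print(generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```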