This research introduces a novel method based on the Lateral Accretive Hybrid Network (LEARNet) to capture and analyze micro-expressions for mental health applications. The method refines both broad and subtle facial cues to detect mental health conditions such as anxiety or depression. The authors also propose a neural architecture search (NAS) strategy that designs a compact CNN for micro-expression recognition, improving recognition performance while reducing computational cost. Why it matters: By integrating micro-emotion recognition with mental health estimation, the approach enables more accurate and earlier detection of emotional and mental health issues, potentially leading to improved well-being.
This paper introduces a deep learning framework for automated pain-level detection, designed for deployment in the UAE healthcare system. The system aims to assist in patient-centric pain management and diagnosis support, which is particularly relevant where medical staff are in short supply. The research evaluates the framework's performance against commonly used approaches, indicating its potential for accurate pain-level identification.
This study investigates the ability of six large language models, including Jais, Mistral, and GPT-4o, to mimic human emotional expression in English and personality markers in Arabic. The researchers evaluated whether machine classifiers could distinguish between human-authored and AI-generated texts and assessed the emotional and personality traits exhibited by the LLMs. Results indicate that AI-generated texts are distinguishable from human-authored ones, with classification performance degraded by paraphrasing, and that LLMs encode affective signals differently from humans. Why it matters: The findings have implications for authorship attribution, affective computing, and the responsible deployment of AI, especially in under-resourced languages like Arabic.
KAUST Associate Professor Xiangliang Zhang is using machine learning to analyze social media posts on Twitter related to COVID-19. Her team at KAUST's Computational Bioscience Research Center is analyzing sentiment in tweets using hashtags like #coronavirus and #covid19. Zhang aims to use this data to help predict localized outbreaks and provide an early warning system for governments and organizations. Why it matters: This research demonstrates the potential of AI-powered sentiment analysis to support public health efforts and inform decision-making during pandemics in the Middle East and globally.
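The core idea of hashtag-filtered sentiment tracking can be sketched in a few lines. The snippet below is a toy illustration only, not the KAUST team's pipeline: it uses a hypothetical hand-built lexicon where a real system would use a trained classifier, and aggregates an average sentiment score per region.

```python
# Toy sketch of hashtag-filtered, per-region sentiment aggregation.
# Illustrative only: the lexicon, data, and scoring are hypothetical.
from collections import defaultdict

# Tiny stand-in lexicon; a production system would use a trained model.
LEXICON = {"hope": 1, "recover": 1, "safe": 1, "fear": -1, "sick": -1, "outbreak": -1}

def score(text: str) -> int:
    """Sum toy lexicon scores over whitespace tokens (punctuation stripped)."""
    return sum(LEXICON.get(tok.strip("#,.!").lower(), 0) for tok in text.split())

def regional_sentiment(tweets, tags=("#coronavirus", "#covid19")):
    """Average sentiment per region over tweets carrying the target hashtags."""
    totals, counts = defaultdict(int), defaultdict(int)
    for region, text in tweets:
        if any(t in text.lower() for t in tags):
            totals[region] += score(text)
            counts[region] += 1
    return {region: totals[region] / counts[region] for region in totals}
```

A time series of such per-region averages is the kind of signal that could feed an early-warning model, since sentiment shifts may precede reported case spikes.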
Researchers from MBZUAI and Monash University presented a study at EMNLP 2024 examining LLMs' ability to interpret empathy, emotion, and morality in written stories. The study builds on a framework for modeling empathic similarity between narratives, using the EmpathicStories dataset. The authors are exploring ways to improve LLMs' handling of complex concepts such as empathy, especially for applications in fields like healthcare. Why it matters: Enhancing LLMs with empathic understanding could lead to more effective and human-centered AI applications, particularly in sensitive domains requiring nuanced communication.
This article previews a talk by Gül Varol from Ecole des Ponts ParisTech on bridging natural language and 3D human motions. The talk will cover text-to-motion synthesis using generative models and text-to-motion retrieval models based on the ACTOR, TEMOS, TMR, TEACH, and SINC papers. Varol's research interests include video representation learning, human motion synthesis, and sign languages. Why it matters: Research in this area could enable more intuitive human-computer interaction and new applications in areas like virtual reality and robotics.
Pong C Yuen from Hong Kong Baptist University will present a talk on remote photoplethysmography (rPPG) detection. The talk will review the development of rPPG detection, share recent research, and discuss future directions. rPPG is a non-contact technology for computer vision and healthcare applications such as heart rate estimation. Why it matters: Advancements in rPPG could enable new remote patient monitoring and diagnostic tools in the region, reducing the need for physical contact.
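The basic rPPG signal path can be illustrated compactly. The sketch below is a minimal, generic version (not Yuen's method): the mean green-channel intensity of a facial region oscillates with blood volume, so the dominant frequency within a plausible pulse band gives a heart-rate estimate.

```python
# Minimal rPPG-style heart-rate sketch (illustrative, not a specific method).
# Input: per-frame mean green-channel values from a tracked facial region.
import numpy as np

def estimate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Return the dominant frequency, in beats/min, within the 40-180 BPM band."""
    signal = green_means - green_means.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency axis in Hz
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)    # plausible pulse range
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 1.2 Hz oscillation at 30 fps should read as 72 BPM.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
bpm = estimate_bpm(trace, fps)
```

Real systems add face tracking, motion compensation, and illumination robustness on top of this core idea; those steps are what much of the rPPG literature, including the detection work discussed in the talk, focuses on.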
Ekaterina Radionova from Smarter AI (formerly Samsung AI Center) presented an approach to generating lifelike real-time avatars. The work focuses on generating high-quality video with authentic facial features fast enough to support online generation. Radionova holds a master's degree in Data Science from Skoltech and a bachelor's degree in Applied Mathematics from the Moscow Institute of Physics and Technology. Why it matters: Achieving realistic real-time avatars is critical for applications in online communication, entertainment, and virtual reality within the region.