MBZUAI researchers co-led a study published in Nature demonstrating that GluFormer, an AI foundation model trained on continuous glucose monitoring (CGM) data, more accurately predicts long-term diabetes and cardiovascular risk than current clinical standards. GluFormer, built on a transformer architecture and trained using NVIDIA AI infrastructure on over 10 million CGM measurements, forecasts individual health risks using short-term glucose dynamics. In a 12-year follow-up, the model captured 66% of new-onset diabetes cases and 69% of cardiovascular-death events in its highest-risk group, outperforming established CGM-derived metrics across 19 external cohorts. Why it matters: The development of GluFormer represents a significant advancement in personalized healthcare, enabling proactive and individualized health strategies through the analysis of dynamic glucose data.
Researchers introduce TomFormer, a transformer-based model for accurate and early detection of tomato leaf diseases, intended for deployment on the Hello Stretch robot for real-time diagnosis. TomFormer combines a vision transformer with a CNN and achieves state-of-the-art results on the KUTomaDATA, PlantDoc, and PlantVillage datasets. KUTomaDATA was collected from a greenhouse in Abu Dhabi, UAE.
Dr. Yves Agid from the ICM Paris Institute of Translational Neuroscience lectured at KAUST's 2018 Winter Enrichment Program about the role of glial cells in brain function and behavior. He highlighted that glial cells, often overlooked in research, are crucial for neural synchronization and overall intelligence. Dysfunction of glial cells can induce pathologies like Alzheimer's and Parkinson's disease. Why it matters: The lecture underscored the importance of studying glial cells in addition to neurons for understanding and treating neurodegenerative disorders, which could influence future research directions at KAUST and in the region.
KAUST researchers have identified a protein complex of HuR and YB1 that stabilizes messenger RNA during muscle-fiber formation. The complex protects RNA as it carries muscle-forming code through the cell. Further research aims to elucidate the individual roles of each protein in the stabilization process. Why it matters: Understanding this RNA-stabilizing complex could lead to new therapies for muscle recovery and the prevention of muscle-related pathologies.
Giovanni Puccetti from ISTI-CNR presented research on linguistic probing of language models like BERT and RoBERTa. The research investigates the ability of these models to encode linguistic properties, linking this ability to outlier parameters. Preliminary work on fine-tuning LLMs in Italian and detecting synthetic news generation was also presented. Why it matters: Understanding the inner workings and linguistic capabilities of LLMs is crucial for improving their reliability and adapting them to diverse languages like Arabic.
This paper introduces a convolutional transformer model for classifying tomato maturity, along with a new UAE-sourced dataset, KUTomaData, for training segmentation and classification models. The model combines CNNs and transformers and was evaluated on KUTomaData as well as two public datasets. It achieved state-of-the-art performance, outperforming existing methods by significant margins in mAP across all three datasets.
Marcus Engsig from DERC will present a paper at the MATLAB User Group Meeting in Abu Dhabi on October 6. The paper, titled ‘Generalization of Higher Order Methods For Fast Iterative Matrix Inversion Compatible With GPU Acceleration’, discusses a novel approach to matrix inversion using GPUs. The method, named Nested Neumann, achieves 4-100x acceleration compared to standard MATLAB methods for large matrices. Why it matters: This research contributes to faster computation in numerical and physical modeling, crucial for processing large datasets in various scientific and engineering applications in the region.
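The flavor of higher-order iterative inversion referenced above can be sketched as follows. This is an illustrative NumPy implementation of an order-p Newton-type update built from a truncated Neumann series of the residual, not the authors' Nested Neumann code; the function name and parameters are hypothetical.

```python
import numpy as np

def higher_order_inverse(A, order=3, tol=1e-10, max_iter=100):
    """Approximate A^{-1} with an order-p iteration: each step multiplies
    the current estimate by a truncated Neumann series of the residual.
    Illustrative sketch only, not the paper's Nested Neumann method."""
    n = A.shape[0]
    I = np.eye(n)
    # Standard convergent initial guess: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X                      # residual of the current estimate
        if np.linalg.norm(R) < tol:
            break
        # Truncated Neumann series: S = I + R + R^2 + ... + R^(order-1)
        S, term = I.copy(), I.copy()
        for _ in range(1, order):
            term = term @ R
            S = S + term
        X = X @ S                          # residual shrinks to R^order per step
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = higher_order_inverse(A)
print(np.allclose(X @ A, np.eye(2)))
```

Because each step maps the residual R to R^order, higher orders trade a few extra matrix products per iteration for far fewer iterations, and every operation is a dense matrix multiply, which is exactly the workload GPUs accelerate well.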
Pascal Fua from EPFL presented an approach to implementing convolutional neural nets that output complex 3D surface meshes. The method overcomes limitations in converting implicit representations to explicit surface representations. Applications include single view reconstruction, physically-driven shape optimization, and bio-medical image segmentation. Why it matters: This research advances geometric deep learning by enabling end-to-end trainable models for 3D surface mesh generation, with potential impact on various applications in computer vision and biomedical imaging in the region.