Carlos Duarte, a professor of Marine Science at KAUST, discusses climate change adaptation and mitigation. He was interviewed outside the KAUST Museum of Science and Technology. The interview is part of a Frontiers Research Topic on Climate Change Adaptation and Mitigation. Why it matters: This highlights KAUST's focus on addressing climate change through scientific research and its engagement with international platforms like Frontiers.
This article discusses domain shift in machine learning, where the distribution of test data differs from that of the training data, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data together with unlabeled target data. Domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
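A common building block in domain adaptation is a distribution-discrepancy measure between source and target features, which the adapted model is trained to reduce. The sketch below is a minimal, illustrative example (not from the article) that computes a Gaussian-kernel Maximum Mean Discrepancy on synthetic features; all names and values are assumptions for demonstration.

```python
import numpy as np

def gaussian_mmd(source, target, sigma=1.0):
    """Squared Maximum Mean Discrepancy with a Gaussian kernel.

    Domain-adaptation methods often minimize such a discrepancy
    between labeled source features and unlabeled target features,
    so a classifier trained on the source transfers better.
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances via broadcasting.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())

# Synthetic 2-D features: the further the target distribution drifts
# from the source, the larger the discrepancy becomes.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 2))         # labeled source features
tgt_mild = rng.normal(0.1, 1.0, size=(200, 2))    # mild domain shift
tgt_severe = rng.normal(2.0, 1.0, size=(200, 2))  # severe domain shift

mmd_mild = gaussian_mmd(src, tgt_mild)
mmd_severe = gaussian_mmd(src, tgt_severe)  # larger: bigger shift
```

In a full adaptation pipeline this term would be added to the source-domain classification loss, pulling the two feature distributions together during training.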
MBZUAI and RIKEN-AIP (Japan) co-hosted a joint workshop at MBZUAI's Masdar City campus. The workshop facilitated the sharing of research and perspectives across machine learning, computer vision, and natural language processing. Researchers from both institutions explored interdisciplinary cooperation to enhance AI's capacity to address real-world problems. Why it matters: This collaboration strengthens MBZUAI's position as a hub for cross-disciplinary AI research and fosters international partnerships in the field.
This paper introduces Adaptive Entropy-aware Optimization (AEO), a new framework to tackle Multimodal Open-set Test-time Adaptation (MM-OSTTA). AEO uses Unknown-aware Adaptive Entropy Optimization (UAE) and Adaptive Modality Prediction Discrepancy Optimization (AMP) to distinguish unknown-class samples during online adaptation by amplifying the entropy difference between known and unknown samples. The study establishes a new benchmark derived from existing datasets spanning five modalities and evaluates AEO across various domain shift scenarios, demonstrating its effectiveness in long-term and continual MM-OSTTA settings.
The paper introduces Yet another Policy Optimization (YaPO), a reference-free method for learning sparse steering vectors in the latent space of a Sparse Autoencoder (SAE) to steer LLMs. By optimizing sparse codes, YaPO produces disentangled, interpretable, and efficient steering directions. Experiments show that YaPO converges faster, achieves stronger performance, exhibits improved training stability, and preserves general knowledge compared to dense steering baselines.
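Mechanically, a sparse steering vector is a code with only a few active SAE latents; decoding it through the SAE's decoder yields a dense direction that is added to the model's hidden state. The sketch below illustrates just that decoding-and-adding step with random stand-in weights; the decoder matrix, dimensions, active indices, and scaling factor are all hypothetical, not YaPO's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64  # illustrative hidden and SAE latent sizes

# Stand-in for a pretrained SAE decoder: maps sparse latent codes
# back to the model's residual-stream space.
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)

# A sparse code: only a few SAE latents are active, which is what
# makes the resulting steering direction disentangled and cheap
# to optimize.  Indices and magnitudes are illustrative.
sparse_code = np.zeros(d_sae)
sparse_code[[3, 17, 42]] = [1.5, -0.8, 2.0]

steering_vector = sparse_code @ W_dec  # dense direction in model space

def steer(hidden_state, alpha=1.0):
    """Shift a hidden state along the decoded steering direction."""
    return hidden_state + alpha * steering_vector

h = rng.normal(size=d_model)        # a hidden state at some layer
h_steered = steer(h, alpha=2.0)     # steered activation
```

YaPO's contribution is in how the sparse code itself is learned (reference-free policy optimization); once learned, applying it at inference time is just this cheap add.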