This paper analyzes the impact of device uncertainties on deep neural networks (DNNs) in computing-in-memory (CiM) systems built on emerging devices. The authors propose UAE, an uncertainty-aware neural architecture search (NAS) scheme, to identify DNN models that are robust to these uncertainties, mitigating the accuracy drop that occurs when trained models are deployed on real-world platforms.
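The problem the paper targets can be illustrated with a minimal sketch: perturb the weights of a toy linear classifier with Gaussian noise (a simple stand-in for device variation) and measure the resulting accuracy drop. This shows the failure mode UAE addresses, not the UAE method itself; all names and the noise model here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear classifier; labels are defined by the clean weights.
W = rng.normal(size=(4, 3))           # stand-in for trained weights
X = rng.normal(size=(500, 4))
y = np.argmax(X @ W, axis=1)          # clean-model predictions as ground truth

def accuracy_under_noise(sigma, trials=50):
    """Average accuracy when additive Gaussian noise perturbs the weights."""
    accs = []
    for _ in range(trials):
        W_noisy = W + rng.normal(0, sigma, W.shape)  # simulated device variation
        accs.append(np.mean(np.argmax(X @ W_noisy, axis=1) == y))
    return float(np.mean(accs))

print(accuracy_under_noise(0.0))   # 1.0 by construction (no noise)
print(accuracy_under_noise(0.5))   # degraded accuracy under weight noise
```

An uncertainty-aware NAS scheme would, roughly speaking, search for architectures whose accuracy curve degrades as slowly as possible under such perturbations.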
KAUST Professor Raul Tempone, an expert in Uncertainty Quantification (UQ), has been appointed as an Alexander von Humboldt Professor at RWTH Aachen University in Germany. This professorship will enable him to further his research on mathematics for uncertainty quantification with new collaborators. Tempone believes the KAUST Strategic Initiative for Uncertainty Quantification (SRI-UQ) contributed to this award. Why it matters: This appointment enhances KAUST's visibility and facilitates cross-fertilization between European and KAUST research groups, benefiting both institutions and attracting talent.
MBZUAI Professor Kun Zhang's research focuses on causality in AI systems, aiming to understand underlying processes beyond data correlation. He emphasizes the importance of causality and graphical representations to model why systems produce observations and account for uncertainty. Zhang served as a program chair at the 38th Conference on Uncertainty in Artificial Intelligence (UAI) in Eindhoven. Why it matters: This highlights the growing importance of causality and uncertainty in AI research, crucial for responsible AI deployment and decision-making in the region.
MBZUAI's Maxim Panov is developing uncertainty quantification methods to improve the reliability of language models. His work focuses on providing insights into the confidence level of machine learning models' predictions, especially in scenarios where accuracy is critical, such as medicine. Panov is working on post-processing techniques that can be applied to already-trained models. Why it matters: This research aims to address the issue of "hallucinations" in language models, enhancing their trustworthiness and applicability in sensitive domains within the region and globally.
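To make "post-processing techniques applied to already-trained models" concrete, the sketch below shows temperature scaling, a standard post-hoc calibration method. It is offered as a generic example of this class of techniques, not as Panov's specific method, and assumes pre-computed validation logits and labels.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the labels under temperature-scaled softmax."""
    p = softmax(logits / T)
    return float(-np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12)))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the single scalar T that minimizes validation NLL.

    The model's accuracy is untouched (argmax is invariant to T);
    only the confidence of its predictions changes.
    """
    return min(grid, key=lambda T: nll(logits, labels, T))
```

For an overconfident model, the fitted temperature exceeds 1, softening predicted probabilities so that stated confidence better matches observed accuracy.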
Munther Dahleh from MIT gave a talk on information design under uncertainty, focusing on the challenges of creating an information marketplace. The talk addressed the externality faced by firms when information is allocated to competitors, and considered two models for this externality. The presentation included mechanisms for both models and highlighted the impact of competition on the revenue collected by the seller. Why it matters: The research advances understanding of information markets and mechanism design, relevant to the growing data economy in the GCC region.
Dr. Maxim Panov from TII Abu Dhabi will give a talk on uncertainty estimation in neural networks, covering model calibration, ensemble methods, and Bayesian approaches. The talk will focus on efficient single-network methods for quantifying prediction confidence, without requiring ensembles or major training changes. Panov's background includes experience at Skolkovo Institute of Science and Technology and DATADVANCE Company. Why it matters: Improving uncertainty estimation is crucial for deploying reliable AI systems in critical applications across the GCC region.
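As background for the single-network methods the talk covers, two common baselines that need no ensembles or retraining are the maximum softmax probability and the predictive entropy of a single forward pass. The sketch below is a generic illustration of these baselines, not necessarily the specific methods presented.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    """Entropy of the softmax distribution; higher means less confident."""
    p = softmax(logits)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def max_prob_confidence(logits):
    """Maximum class probability; a simple per-prediction confidence score."""
    return softmax(logits).max(axis=-1)
```

Both scores come for free from the network's existing output, which is what makes single-network uncertainty estimation attractive for deployment.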
This paper introduces a Bayesian optimization method for estimating tire parameters and their uncertainty, addressing a gap in existing literature. The methodology uses Stochastic Variational Inference to estimate parameters and uncertainties, and it is validated against a Nelder-Mead algorithm. The approach is applied to real-world data from the Abu Dhabi Autonomous Racing League, revealing uncertainties in identifying curvature and shape parameters due to insufficient excitation. Why it matters: The research provides a practical tool for assessing tire model parameters in real-world conditions, with implications for autonomous racing and vehicle dynamics modeling in the GCC region.
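The core idea of estimating a parameter together with its uncertainty via Stochastic Variational Inference can be sketched on a deliberately simple stand-in model (a linear relation with one parameter, not the actual tire model from the paper): fit a Gaussian variational posterior by stochastic gradient ascent on the ELBO using the reparameterization trick. All quantities below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = theta_true * x + noise (toy stand-in for a tire model).
theta_true, sigma, tau = 2.0, 0.5, 10.0   # true parameter, noise std, prior std
x = rng.uniform(-1, 1, 200)
y = theta_true * x + rng.normal(0, sigma, x.size)

def grad_log_joint(theta):
    """d/dtheta of log p(y | theta) + log p(theta), prior N(0, tau^2)."""
    return np.sum((y - theta * x) * x) / sigma**2 - theta / tau**2

# Mean-field Gaussian q(theta) = N(mu, s^2), optimized by SVI.
mu, log_s = 0.0, 0.0
lr = 1e-3
for _ in range(3000):
    eps = rng.normal()
    s = np.exp(log_s)
    theta = mu + s * eps                   # reparameterized sample
    g = grad_log_joint(theta)
    mu += lr * g                           # stochastic dELBO/dmu
    log_s += lr * (g * s * eps + 1.0)      # stochastic dELBO/dlog_s (entropy grad = 1)

print(mu, np.exp(log_s))  # posterior mean and std of the parameter
```

The learned standard deviation is the point of the exercise: a wide posterior over a parameter signals weak identifiability, which is exactly the "insufficient excitation" effect the paper reports for the curvature and shape parameters.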
MBZUAI's Dr. Artem Shelmanov is working on uncertainty quantification (UQ) methods for generative LLMs to detect unreliable generations. He aims to address the problem that LLMs fabricate facts, often called "hallucinating," without giving any clear indicator of veracity. He systematizes existing UQ efforts, discusses their caveats, and suggests novel techniques for safer LLM use. Why it matters: Improving the reliability of LLMs is crucial for responsible AI deployment in the region, especially in sensitive applications.
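One of the simplest signals such UQ methods build on is the model's own token probabilities: a generation whose tokens were sampled with low probability is more likely to be unreliable. The sketch below scores a sequence by its mean token-level negative log-probability; it is a generic baseline, not Shelmanov's specific technique, and `sequence_uncertainty` is a hypothetical helper operating on per-step logits.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_uncertainty(logits, token_ids):
    """Mean negative log-probability of the generated tokens.

    logits: (n_steps, vocab_size) per-step logits from the model.
    token_ids: (n_steps,) the tokens actually generated.
    Higher scores flag less confident, potentially hallucinated output.
    """
    p = softmax(logits)
    tok_p = p[np.arange(len(token_ids)), token_ids]
    return float(-np.mean(np.log(tok_p + 1e-12)))
```

More sophisticated methods refine this raw likelihood signal (for instance by discounting uncertainty that merely reflects many valid paraphrases), but thresholding a score like this is a common starting point for filtering unreliable generations.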