Dr. Maxim Panov from TII Abu Dhabi will give a talk on uncertainty estimation in neural networks, covering model calibration, ensemble methods, and Bayesian approaches. The talk will focus on efficient methods that quantify prediction confidence with a single network, without requiring ensembles or major changes to training. Panov previously held positions at the Skolkovo Institute of Science and Technology and at DATADVANCE. Why it matters: Improving uncertainty estimation is crucial for deploying reliable AI systems in critical applications across the GCC region.
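For readers who want a concrete anchor: the simplest single-network confidence signal is the entropy of the softmax output. A minimal sketch follows; the logits are placeholders, not material from the talk.

```python
# Minimal sketch: predictive entropy as a single-network confidence score.
# The logits below are stand-ins; any classifier's raw outputs would work.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predictive_entropy(logits):
    """Entropy of the softmax distribution; higher means less confident."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

logits = np.array([[4.0, 0.5, 0.1],    # peaked distribution: confident
                   [1.0, 0.9, 0.8]])   # flat distribution: uncertain
print(predictive_entropy(logits))      # low entropy first, high second
```

No retraining or second network is needed, which is what makes signals like this a natural baseline for the efficient single-network methods the talk covers.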
MBZUAI's Maxim Panov is developing uncertainty quantification methods to improve the reliability of language models. His work focuses on quantifying how confident a machine learning model is in its predictions, especially in settings where errors are costly, such as medicine. Panov is working on post-processing techniques that can be applied to already-trained models. Why it matters: This research aims to address the issue of "hallucinations" in language models, enhancing their trustworthiness and applicability in sensitive domains within the region and globally.
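Temperature scaling is one widely used post-processing method of this kind; the source does not say it is among Panov's techniques, so treat the following as a generic sketch with stand-in validation data.

```python
# Sketch of temperature scaling: post-hoc calibration of a frozen,
# already-trained classifier. Logits and labels here are random stand-ins.
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
val_logits = rng.normal(size=(100, 5)) * 3.0  # stand-in held-out logits
val_labels = rng.integers(0, 5, 100)          # stand-in held-out labels

res = minimize_scalar(nll, bounds=(0.05, 10.0),
                      args=(val_logits, val_labels), method="bounded")
print("fitted temperature:", res.x)  # T > 1 softens overconfident logits
```

The model itself is never touched: only a single scalar is fit on held-out data, which is exactly the appeal of post-processing approaches.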
A recent study questions the necessity of deep ensembles, showing that a single, sufficiently large network can match their accuracy. The study demonstrates that ensemble diversity does not meaningfully improve uncertainty quantification on out-of-distribution data, and that the out-of-distribution performance of ensembles is strongly determined by their in-distribution performance. Why it matters: The findings suggest that larger single neural networks can replicate the benefits of deep ensembles, potentially simplifying model deployment and reducing computational costs in the region.
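For reference, ensemble uncertainty is commonly decomposed into an average per-member entropy plus a diversity (mutual-information) term; the diversity term is what the study finds adds little out of distribution. A sketch with random stand-in members:

```python
# Sketch: decomposing deep-ensemble uncertainty into total entropy and a
# "diversity" (mutual information) term. Member outputs are random
# stand-ins for independently trained networks.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(0)
member_probs = softmax(rng.normal(size=(5, 8, 10)))  # (members, batch, classes)
mean_probs = member_probs.mean(axis=0)               # ensemble prediction

total = entropy(mean_probs)                  # total predictive uncertainty
avg_member = entropy(member_probs).mean(0)   # expected member entropy
diversity = total - avg_member               # disagreement between members
print(diversity)                             # near zero when members agree
```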
This paper analyzes the impact of device uncertainties on deep neural networks (DNNs) deployed on Computing-in-Memory (CiM) systems built from emerging devices. The authors propose UAE, an uncertainty-aware Neural Architecture Search scheme that identifies DNN models robust to these uncertainties, with the goal of mitigating the accuracy drop that occurs when trained models are deployed on real-world platforms.
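A common way to probe this robustness in simulation is to inject noise into a trained network's weights and measure the expected accuracy drop; the sketch below assumes a Gaussian, magnitude-proportional noise model, which is an illustrative assumption rather than UAE's actual formulation.

```python
# Sketch: evaluating a model's robustness to device variation by injecting
# weight noise. The tiny MLP and noise model are illustrative assumptions,
# not the paper's search procedure.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)          # two-layer MLP with ReLU
    return (h @ w2).argmax(axis=-1)

def accuracy_under_noise(x, y, w1, w2, sigma, trials=20):
    accs = []
    for _ in range(trials):
        n1 = w1 + rng.normal(0.0, sigma * np.abs(w1))  # proportional noise
        n2 = w2 + rng.normal(0.0, sigma * np.abs(w2))
        accs.append((forward(x, n1, n2) == y).mean())
    return np.mean(accs), np.std(accs)

# Stand-in data and weights; a real evaluation would use trained models.
x = rng.normal(size=(256, 16)); y = rng.integers(0, 4, 256)
w1 = rng.normal(size=(16, 32)); w2 = rng.normal(size=(32, 4))
print(accuracy_under_noise(x, y, w1, w2, sigma=0.1))
```

An uncertainty-aware NAS such as UAE would, roughly speaking, fold a robustness score like this into its architecture-selection objective.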
MBZUAI's Dr. Artem Shelmanov is working on uncertainty quantification (UQ) methods for generative LLMs to detect unreliable generations. He aims to address the problem of LLMs fabricating facts, often called "hallucinating", while giving no clear indication of how trustworthy their output is. His work systematizes existing UQ efforts, discusses their caveats, and proposes novel techniques for safer LLM use. Why it matters: Improving the reliability of LLMs is crucial for responsible AI deployment in the region, especially in sensitive applications.
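One simple baseline from this literature (not specifically Shelmanov's proposal) scores a generation by its length-normalized log-probability, flagging low-probability outputs as candidates for distrust. The per-token log-probs below are placeholders; in practice they come from the model's scores at generation time.

```python
# Sketch: length-normalized sequence log-probability as an unreliability
# signal. Token log-probs are placeholders for real generation scores.
def sequence_confidence(token_logprobs):
    """Mean per-token log-prob; closer to 0 means a more confident generation."""
    return sum(token_logprobs) / len(token_logprobs)

confident = [-0.1, -0.2, -0.05, -0.15]  # model assigned high probability
uncertain = [-2.3, -1.8, -3.1, -2.7]    # low probability: worth flagging

print(sequence_confidence(confident))   # -0.125
print(sequence_confidence(uncertain))   # -2.475
```

Raw sequence probability conflates genuine uncertainty with ordinary linguistic variability, a known limitation that motivates the more refined techniques surveyed in this line of work.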