GCC AI Research

Results for "uncertainties"

Uncertainty Modeling of Emerging Device-based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search

arXiv ·

This paper analyzes the impact of device uncertainties on deep neural networks (DNNs) deployed on emerging-device-based computing-in-memory (CiM) accelerators. The authors propose UAE, an uncertainty-aware neural architecture search scheme, to identify DNN models that remain robust under these uncertainties, mitigating the accuracy drop that occurs when trained models are deployed on real-world platforms.
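
For intuition, here is a minimal sketch of the kind of uncertainty-aware evaluation such a search might rely on. The multiplicative Gaussian weight noise and the scoring loop below are illustrative assumptions, not the paper's UAE procedure, which defines its own device model and search algorithm.

```python
# Sketch: score a candidate architecture under simulated device noise.
# The noise model (multiplicative Gaussian on weights) is an assumption
# standing in for CiM device variation.
import copy
import torch

def noisy_accuracy(model, loader, sigma=0.05, n_samples=10, device="cpu"):
    """Average accuracy over Monte Carlo draws of weight noise."""
    accs = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model).to(device).eval()
        with torch.no_grad():
            for p in noisy.parameters():
                p.mul_(1.0 + sigma * torch.randn_like(p))  # simulated device variation
            correct = total = 0
            for x, y in loader:
                pred = noisy(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        accs.append(correct / total)
    return sum(accs) / len(accs)
```

A NAS loop would then rank candidates by noisy accuracy rather than clean accuracy, preferring architectures whose performance degrades least under device variation.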

Creating certainty through uncertainty

MBZUAI ·

MBZUAI Professor Kun Zhang's research focuses on causality in AI systems, aiming to understand the processes that generate data rather than mere correlations within it. He emphasizes causal and graphical representations for modeling why a system produces the observations it does and for accounting for uncertainty. Zhang served as a program chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI) in Eindhoven. Why it matters: This highlights the growing importance of causality and uncertainty in AI research, crucial for responsible AI deployment and decision-making in the region.

The role of data-driven models in quantifying uncertainty

KAUST ·

KAUST Professor Raul Tempone, an expert in Uncertainty Quantification (UQ), has been appointed as an Alexander von Humboldt Professor at RWTH Aachen University in Germany. This professorship will enable him to further his research on mathematics for uncertainty quantification with new collaborators. Tempone believes the KAUST Strategic Initiative for Uncertainty Quantification (SRI-UQ) contributed to this award. Why it matters: This appointment enhances KAUST's visibility and facilitates cross-fertilization between European and KAUST research groups, benefiting both institutions and attracting talent.

Separating fact from fiction with uncertainty quantification

MBZUAI ·

MBZUAI's Maxim Panov is developing uncertainty quantification methods to improve the reliability of language models. His work focuses on providing insights into the confidence level of machine learning models' predictions, especially in scenarios where accuracy is critical, such as medicine. Panov is working on post-processing techniques that can be applied to already-trained models. Why it matters: This research aims to address the issue of "hallucinations" in language models, enhancing their trustworthiness and applicability in sensitive domains within the region and globally.
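
One widely used post-processing technique in this spirit is temperature scaling (Guo et al., 2017). Whether Panov's methods take this exact form is not stated, so the sketch below is only a generic illustration of post-hoc calibration applied to an already-trained model.

```python
# Sketch: temperature scaling, a standard post-hoc calibration method.
# A single scalar T is fit on held-out logits; the model itself is untouched.
import torch

def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Learn T > 0 minimizing NLL of (logits / T) on a validation set."""
    log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()
```

At test time, softmax(test_logits / T) yields confidences that better reflect the true probability of being correct, without retraining.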

Confidence sets for Causal Discovery

MBZUAI ·

A new framework for constructing confidence sets for causal orderings within structural equation models (SEMs) is presented. It leverages a residual bootstrap procedure to test the goodness-of-fit of candidate causal orderings, thereby quantifying uncertainty in causal discovery. The method is computationally efficient for medium-sized problems while maintaining theoretical guarantees as the number of variables grows. Why it matters: This adds a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, though the work has no stated connection to the Middle East.
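
As a toy illustration of the residual-bootstrap idea (simplified well beyond the paper's actual construction), the sketch below tests the ordering X1 → X2 in a bivariate linear SEM. The dependence statistic and bootstrap scheme here are assumptions chosen for brevity.

```python
# Toy sketch: residual-bootstrap goodness-of-fit test for the ordering
# X1 -> X2 in a bivariate linear SEM. Not the paper's method.
import numpy as np

def fit_stat(x1, x2):
    """Regress x2 on x1; dependence of residuals on x1 flags a bad ordering."""
    beta = np.polyfit(x1, x2, 1)
    resid = x2 - np.polyval(beta, x1)
    # Correlation of the regressor with squared residuals as a crude
    # dependence measure (assumption for illustration).
    return abs(np.corrcoef(x1, resid**2)[0, 1]), resid, beta

def ordering_pvalue(x1, x2, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    stat, resid, beta = fit_stat(x1, x2)
    null = []
    for _ in range(n_boot):
        # Resample residuals to simulate data consistent with the ordering.
        x2_star = np.polyval(beta, x1) + rng.choice(resid, size=len(resid))
        null.append(fit_stat(x1, x2_star)[0])
    return np.mean(np.array(null) >= stat)
```

Orderings whose p-value exceeds a chosen level alpha are retained, and together they form the confidence set.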

Uncertainty Estimation: Can your neural network provide confidence for its predictions?

MBZUAI ·

Dr. Maxim Panov from TII Abu Dhabi will give a talk on uncertainty estimation in neural networks, covering model calibration, ensemble methods, and Bayesian approaches. The talk will focus on efficient single-network methods for quantifying prediction confidence, without requiring ensembles or major training changes. Panov's background includes experience at Skolkovo Institute of Science and Technology and DATADVANCE Company. Why it matters: Improving uncertainty estimation is crucial for deploying reliable AI systems in critical applications across the GCC region.
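
The simplest member of the single-network family is predictive entropy computed from one forward pass; the talk's specific methods may be more sophisticated, so the sketch below is only a baseline illustration.

```python
# Sketch: single-network uncertainty from one forward pass via
# predictive entropy of the softmax output. Higher = less confident.
import torch
import torch.nn.functional as F

@torch.no_grad()
def predictive_entropy(model, x):
    probs = F.softmax(model(x), dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```

Inputs whose entropy exceeds a validation-chosen threshold can then be deferred to a human or a fallback system, with no ensembles and no change to training.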

Safety of Deploying NLP Models: Uncertainty Quantification of Generative LLMs

MBZUAI ·

MBZUAI's Dr. Artem Shelmanov is working on uncertainty quantification (UQ) methods for generative LLMs to detect unreliable generations. He aims to address the issue of LLMs fabricating facts, often called "hallucinating," without clear indicators of veracity. He systematizes existing UQ efforts, discusses their caveats, and suggests novel techniques for safer LLM use. Why it matters: Improving the reliability of LLMs is crucial for responsible AI deployment in the region, especially in sensitive applications.
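
A common baseline signal in this line of work is the length-normalized log-likelihood of a generated answer, sketched below. The model name is a placeholder, the token-boundary handling is approximate, and Shelmanov's survey covers considerably stronger methods.

```python
# Sketch: a baseline UQ signal for generative LLMs, the mean token
# log-probability of an answer given its prompt. Low values flag
# potentially unreliable generations. "gpt2" is a placeholder model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

@torch.no_grad()
def sequence_confidence(prompt, answer):
    """Mean log-prob of answer tokens (boundary alignment is approximate)."""
    full = tok(prompt + answer, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    logits = model(full).logits[:, :-1]            # predicts tokens 1..L-1
    logp = torch.log_softmax(logits, dim=-1)
    targets = full[:, 1:]
    token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logp[:, n_prompt - 1:].mean().item()  # answer tokens only
```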

Truth from uncertainty: using AI’s internal signals to spot hallucinations

MBZUAI ·

Researchers from MBZUAI developed "uncertainty quantification heads" (UQ heads) to detect hallucinations in language models by probing internal states and estimating the credibility of generated text. UQ heads leverage attention maps and logits to identify potential hallucinations without altering the model's generation process or relying on external knowledge. The team found that UQ heads achieved state-of-the-art performance in claim-level hallucination detection across different domains and languages. Why it matters: This approach offers a more efficient and accurate method for identifying hallucinations, improving the reliability and trustworthiness of language models in various applications.
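
The probing idea can be pictured as a small trained head reading features from the frozen LLM. The sketch below uses only a generic feature vector per claim, whereas the actual UQ heads also exploit attention maps and logits, so the feature choice here is an illustrative assumption.

```python
# Sketch: a small probe mapping per-claim features extracted from a
# frozen LLM to P(hallucinated). The LLM's decoding is never modified.
import torch
import torch.nn as nn

class UQProbe(nn.Module):
    def __init__(self, d_feat, d_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_feat, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, feats):                # feats: [batch, d_feat]
        return torch.sigmoid(self.net(feats)).squeeze(-1)
```

Training uses claims labeled as supported or hallucinated; at inference, the probe scores generations post hoc, which is what lets the approach work without external knowledge or changes to the generation process.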