GCC AI Research

Separating fact from fiction with uncertainty quantification

MBZUAI · Notable

Summary

MBZUAI's Maxim Panov is developing uncertainty quantification methods to improve the reliability of language models. His work focuses on estimating how confident a machine learning model is in each of its predictions, especially in scenarios where accuracy is critical, such as medicine. Panov is working on post-processing techniques that can be applied to already-trained models. Why it matters: This research aims to address the issue of "hallucinations" in language models, enhancing their trustworthiness and applicability in sensitive domains within the region and globally.
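Post-processing here means adjusting a trained model's confidence without retraining it. A classic example of this family of techniques (an illustration, not necessarily Panov's own method) is temperature scaling, where a single scalar is fit on held-out data to recalibrate the model's output probabilities:

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature that minimizes negative log-likelihood on
    held-out data, leaving the trained model itself untouched."""
    nlls = []
    for T in grid:
        probs = softmax(logits, T)
        nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
        nlls.append(nll)
    return grid[int(np.argmin(nlls))]

# Toy validation logits for a 3-class problem (hypothetical data)
logits = np.array([[4.0, 0.5, 0.2],
                   [0.3, 3.5, 0.1],
                   [0.2, 0.4, 3.0],
                   [2.5, 2.4, 0.1]])
labels = np.array([0, 1, 2, 1])
T = fit_temperature(logits, labels)
calibrated = softmax(logits, T)
```

Because only the temperature is learned, the model's predicted classes never change; only the confidence attached to them becomes better calibrated.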


Related

The role of data-driven models in quantifying uncertainty

KAUST

KAUST Professor Raul Tempone, an expert in Uncertainty Quantification (UQ), has been appointed as an Alexander von Humboldt Professor at RWTH Aachen University in Germany. This professorship will enable him to further his research on mathematics for uncertainty quantification with new collaborators. Tempone believes the KAUST Strategic Initiative for Uncertainty Quantification (SRI-UQ) contributed to this award. Why it matters: This appointment enhances KAUST's visibility and facilitates cross-fertilization between European and KAUST research groups, benefiting both institutions and attracting talent.

Advances in uncertainty quantification methods

KAUST

KAUST hosted the Advances in Uncertainty Quantification Methods, Algorithms and Applications conference (UQAW2016) in January 2016. The event featured 75 presentations and 20 invited speakers from various countries. Professor Raul Tempone presented research on computational approaches to fouling accumulation and wear degradation using stochastic differential equations. Why it matters: This work provides a new computational approach, based on stochastic differential equations, to predict fouling patterns in heat exchangers, which can help optimize maintenance operations and reduce engine shut-down periods.
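Stochastic differential equations of the form dX = mu(X) dt + sigma(X) dW model degradation as deterministic growth plus random fluctuations. A minimal sketch of this idea, using the standard Euler-Maruyama scheme and a hypothetical fouling model (not Tempone's actual equations), looks like:

```python
import math
import random

def euler_maruyama(mu, sigma, x0, t_end, n_steps, seed=0):
    """Simulate dX = mu(X) dt + sigma(X) dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x = x0
    path = [x0]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x = x + mu(x) * dt + sigma(x) * dW
        path.append(x)
    return path

# Hypothetical fouling-accumulation model: growth slows near saturation,
# perturbed by operating-condition noise.
paths = [euler_maruyama(mu=lambda x: 0.1 * (1.0 - x),
                        sigma=lambda x: 0.05,
                        x0=0.0, t_end=10.0, n_steps=1000, seed=s)
         for s in range(100)]
final = [p[-1] for p in paths]
mean_final = sum(final) / len(final)
```

Running many such paths gives a distribution over fouling levels at a future time, which is what lets maintenance be scheduled against a quantified risk rather than a single point forecast.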

Truth from uncertainty: using AI’s internal signals to spot hallucinations

MBZUAI

Researchers from MBZUAI developed "uncertainty quantification heads" (UQ heads) to detect hallucinations in language models by probing internal states and estimating the credibility of generated text. UQ heads leverage attention maps and logits to identify potential hallucinations without altering the model's generation process or relying on external knowledge. The team found that UQ heads achieved state-of-the-art performance in claim-level hallucination detection across different domains and languages. Why it matters: This approach offers a more efficient and accurate method for identifying hallucinations, improving the reliability and trustworthiness of language models in various applications.
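The UQ heads themselves are trained probes, but the raw signals they consume can be illustrated simply: logits from each generation step carry information about how peaked or uncertain the model's next-token distribution was. A hedged sketch of one such signal (mean token entropy, a common baseline rather than the UQ-heads method itself):

```python
import math

def token_entropy(logits):
    """Shannon entropy of the next-token distribution implied by raw logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # stable softmax numerators
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_sequence_entropy(step_logits):
    """Average per-token entropy over a generated sequence; higher values
    suggest the model was less certain while generating the claim."""
    return sum(token_entropy(l) for l in step_logits) / len(step_logits)

# Toy vocab of 3 tokens: peaked vs. near-uniform next-token distributions
confident = [[5.0, 0.1, 0.1], [4.0, 0.2, 0.3]]
uncertain = [[1.0, 0.9, 1.1], [0.5, 0.6, 0.4]]
assert mean_sequence_entropy(uncertain) > mean_sequence_entropy(confident)
```

Because this reads off the model's own internal state, no second model call or external knowledge base is needed, which is the efficiency advantage the MBZUAI work builds on.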

Safety of Deploying NLP Models: Uncertainty Quantification of Generative LLMs

MBZUAI

MBZUAI's Dr. Artem Shelmanov is working on uncertainty quantification (UQ) methods for generative LLMs to detect unreliable generations. He aims to address the issue of LLMs fabricating facts, often called "hallucinating," without any clear indicator of veracity. He systematizes existing UQ efforts, discusses their caveats, and suggests novel techniques for safer LLM use. Why it matters: Improving the reliability of LLMs is crucial for responsible AI deployment in the region, especially in sensitive applications.
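Among the UQ baselines such surveys typically systematize is length-normalized sequence likelihood: average the per-token log-probabilities of a generation and flag it when the model itself found its output unlikely. A minimal sketch (the threshold is hypothetical; in practice it is tuned on held-out data):

```python
def length_normalized_logprob(token_logprobs):
    """Average per-token log-probability of a generated sequence.
    Normalizing by length keeps long outputs comparable to short ones."""
    return sum(token_logprobs) / len(token_logprobs)

def flag_unreliable(token_logprobs, threshold=-2.0):
    """Flag a generation as unreliable when its normalized log-probability
    falls below a threshold (illustrative value, not a recommended default)."""
    return length_normalized_logprob(token_logprobs) < threshold

# Toy per-token log-probs: a high-confidence vs. a low-confidence generation
reliable = [-0.1, -0.3, -0.2, -0.4]
dubious = [-3.0, -2.5, -4.0, -1.8]
assert not flag_unreliable(reliable)
assert flag_unreliable(dubious)
```

Simple scores like this are cheap to compute from the model's own outputs, which is why they serve as the reference point that newer UQ techniques are measured against.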