GCC AI Research


Results for "conformal prediction"

Distribution-Free Conformal Joint Prediction Regions for Neural Marked Temporal Point Processes

MBZUAI

A presentation will demonstrate how conformal prediction can be used to obtain well-calibrated, distribution-free uncertainty estimates for neural Temporal Point Process (TPP) models trained on multiple event sequences. The method builds a distribution-free joint prediction region for event arrival time and type with a finite-sample coverage guarantee. A refined variant derives highest density regions from the joint predictive density of arrival time and type, addressing the challenge of constructing a joint prediction region for a bivariate response that mixes a continuous component (time) with a discrete one (type). Why it matters: This research from a KAUST postdoc improves uncertainty quantification in neural TPPs, which are central to modeling continuous-time event sequences across many application areas, by providing more reliable prediction regions.
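For readers unfamiliar with the underlying mechanics, here is a minimal sketch of the split conformal recipe such joint regions build on. The toy `joint_density` below is a purely hypothetical stand-in for a trained neural TPP's predictive density, and the grid-based construction illustrates the highest-density-region idea rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained neural TPP's joint predictive
# density f(time, type | history): two event types, exponential times.
def joint_density(t, k):
    rates = np.array([1.0, 0.5])
    return rates[k] * np.exp(-rates.sum() * t)

# Nonconformity score: negative predictive density, so thresholding it
# yields a highest density region of the joint distribution.
def score(t, k):
    return -joint_density(t, k)

# Score held-out calibration events, then take the conformal quantile
# with the finite-sample correction ceil((n + 1) * (1 - alpha)) / n.
cal_events = [(rng.exponential(1.0), int(rng.integers(0, 2))) for _ in range(500)]
cal_scores = np.array([score(t, k) for t, k in cal_events])
alpha, n = 0.1, len(cal_scores)
q = np.quantile(cal_scores, min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0),
                method="higher")

# Joint prediction region: every (time, type) pair whose score clears the
# threshold; for exchangeable data it covers the true event with
# probability at least 1 - alpha.
t_grid = np.linspace(0.0, 5.0, 200)
region = {k: t_grid[np.array([score(t, k) for t in t_grid]) <= q] for k in (0, 1)}
```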

Uncertainty Estimation: Can your neural network provide confidence for its predictions?

MBZUAI

Dr. Maxim Panov from TII Abu Dhabi will give a talk on uncertainty estimation in neural networks, covering model calibration, ensemble methods, and Bayesian approaches. The talk will focus on efficient single-network methods for quantifying prediction confidence without requiring ensembles or major changes to training. Panov's background includes experience at the Skolkovo Institute of Science and Technology and DATADVANCE. Why it matters: Improving uncertainty estimation is crucial for deploying reliable AI systems in critical applications across the GCC region.
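As a concrete example of a single-network, post-hoc technique in this space, temperature scaling recalibrates an already-trained classifier by fitting one scalar on held-out data; whether the talk covers this particular method is an assumption, but it illustrates the "no ensembles, no retraining" spirit. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels):
    """Temperature scaling (Guo et al., 2017): divide logits by a single
    learned scalar T > 0 chosen to minimize NLL on a validation set.
    The model's accuracy is unchanged; only its confidences are rescaled."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# Usage: T = fit_temperature(val_logits, val_labels)
#        calibrated_probs = F.softmax(test_logits / T, dim=-1)
```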

CTRL: Closed-Loop Data Transcription via Rate Reduction

MBZUAI

A talk introduces a computational framework for learning compact, structured representations of real-world datasets that are both discriminative and generative. It proposes learning a closed-loop transcription between the distribution of a high-dimensional multi-class dataset and an arrangement of multiple independent subspaces, known as a linear discriminative representation (LDR). The optimality of the closed-loop transcription can be characterized in closed form by an information-theoretic measure known as the rate reduction. Why it matters: The framework unifies the concepts and benefits of auto-encoding and GANs, generalizing them to learning representations of multi-class visual data that are both discriminative and generative.
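The rate reduction itself is a short closed-form expression: ΔR(Z, Π) = R(Z) − R_c(Z, Π), where R(Z) = ½ log det(I + (d / (n ε²)) Z Zᵀ) is the lossy coding rate of a feature matrix Z with n columns in d dimensions. A minimal NumPy sketch of this measure (the ε value here is an arbitrary choice):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z): bits needed to encode the columns of Z (d x n) up to
    distortion eps, via the log-det of a regularized covariance."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) R(Z_j): rate of the whole feature
    set minus the class-conditional rates. Maximizing it expands the
    span of all features while compressing each class onto a subspace."""
    n = Z.shape[1]
    r_within = sum((labels == c).sum() / n * coding_rate(Z[:, labels == c], eps)
                   for c in np.unique(labels))
    return coding_rate(Z, eps) - r_within
```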

Confidence Sets for Causal Discovery

MBZUAI

A new framework is presented for constructing confidence sets for causal orderings within structural equation models (SEMs). It leverages a residual bootstrap procedure to test the goodness-of-fit of candidate causal orderings, thereby quantifying uncertainty in causal discovery. The method is computationally efficient for medium-sized problems and retains theoretical guarantees as the number of variables grows. Why it matters: This adds a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, though the work has no stated connection to the Middle East.
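To make the residual bootstrap idea concrete, here is a toy sketch for a linear SEM. The dependence statistic is a deliberately simple stand-in (the actual goodness-of-fit test in the work differs), data are assumed centered, and all names are illustrative:

```python
import numpy as np

def fit_sem(X, order):
    """Fit a linear SEM along a candidate causal ordering: regress each
    variable on its predecessors; return coefficients and residuals."""
    betas, R = {}, np.empty_like(X)
    for i, j in enumerate(order):
        preds = list(order[:i])
        if preds:
            b, *_ = np.linalg.lstsq(X[:, preds], X[:, j], rcond=None)
            betas[j] = (preds, b)
            R[:, j] = X[:, j] - X[:, preds] @ b
        else:
            betas[j] = ([], None)
            R[:, j] = X[:, j]  # root variable; assumes centered data
    return betas, R

def dependence_stat(R):
    """Toy statistic: largest off-diagonal correlation between nonlinearly
    transformed residuals; small when the ordering is compatible with
    independent (non-Gaussian) SEM errors."""
    p = R.shape[1]
    c = np.corrcoef(np.tanh(R - R.mean(0)), rowvar=False)
    return np.abs(c - np.eye(p)).max()

def ordering_pvalue(X, order, B=200, seed=0):
    """Residual bootstrap p-value for one candidate ordering."""
    rng = np.random.default_rng(seed)
    betas, R = fit_sem(X, order)
    obs, n, boot = dependence_stat(R), len(X), []
    for _ in range(B):
        # Resample residuals independently per variable (enforcing the
        # null), then regenerate data through the fitted SEM.
        Xb = np.empty_like(X)
        for j in order:
            preds, b = betas[j]
            e = rng.choice(R[:, j], size=n, replace=True)
            Xb[:, j] = (Xb[:, preds] @ b if preds else 0) + e
        boot.append(dependence_stat(fit_sem(Xb, order)[1]))
    return (1 + sum(s >= obs for s in boot)) / (B + 1)
```

A confidence set for the ordering then collects every candidate ordering whose p-value exceeds the chosen level α.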

Learning with Noisy Labels

MBZUAI

This article discusses methods for handling label noise in deep learning, including extracting confident examples and explicitly modeling the label noise, as presented by Tongliang Liu of the University of Sydney. The talk aimed to give participants a basic understanding of learning with noisy labels. Why it matters: As AI models are increasingly trained on large, noisy datasets, techniques for robust learning become crucial for reliable real-world performance.
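One widely used way to extract confident examples is the small-loss criterion: samples the current model fits with low loss are more likely to carry clean labels. Whether this is the precise rule discussed in the talk is an assumption; a short PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def select_confident(model, x, y, keep_ratio=0.7):
    """Small-loss trick: return indices of the keep_ratio fraction of
    samples with the lowest loss under the current model; these are
    treated as 'confident' (likely clean) and used for the next update."""
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    k = int(keep_ratio * len(y))
    return torch.argsort(losses)[:k]
```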

Separating fact from fiction with uncertainty quantification

MBZUAI

MBZUAI's Maxim Panov is developing uncertainty quantification methods to improve the reliability of language models. His work focuses on quantifying how confident a machine learning model is in its predictions, especially in scenarios where accuracy is critical, such as medicine. Panov is working on post-processing techniques that can be applied to already-trained models. Why it matters: This research aims to address the issue of "hallucinations" in language models, enhancing their trustworthiness and applicability in sensitive domains within the region and globally.
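As a minimal example of such a post-processing score, assuming access to the per-token logits of an already-trained model (a simple white-box baseline, not necessarily Panov's specific method), the length-normalized negative log-likelihood of a generated answer can serve as an uncertainty signal:

```python
import torch

def sequence_uncertainty(token_logits, token_ids):
    """Length-normalized negative log-likelihood of a generated sequence.
    token_logits: (T, vocab) per-step logits from an already-trained LM;
    token_ids: (T,) generated tokens. A higher value means less confidence
    and can be thresholded to flag likely hallucinations."""
    logprobs = torch.log_softmax(token_logits, dim=-1)
    chosen = logprobs.gather(1, token_ids.unsqueeze(1)).squeeze(1)
    return -chosen.mean().item()
```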

Asymmetry Learning and OOD Robustness

MBZUAI

Bruno Ribeiro from Purdue University presented a talk on Asymmetry Learning and Out-of-Distribution (OOD) Robustness. The talk introduced Asymmetry Learning, a new paradigm that looks for evidence of asymmetries in data to improve classifier performance in both in-distribution and out-of-distribution scenarios. Asymmetry Learning performs a causal structure search to identify classifiers that generalize across different environments. Why it matters: This research addresses a key challenge in AI by proposing a novel approach to improve the reliability and generalization of classifiers in unseen environments, potentially leading to more robust AI systems.
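A heavily simplified stand-in for the cross-environment selection step, assuming per-environment labeled data; the real method searches over causal/symmetry hypotheses, which this sketch does not model:

```python
import numpy as np

def worst_env_risk(predict, envs):
    """Worst-case 0-1 risk of one candidate classifier across training
    environments; envs is a list of (X, y) pairs, one per environment."""
    return max(np.mean(predict(X) != y) for X, y in envs)

# Among candidates (e.g., classifiers built under different hypothesized
# (a)symmetries), keep the one that is most stable across environments:
# best = min(candidates, key=lambda f: worst_env_risk(f, train_envs))
```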