GCC AI Research


Results for "conformal prediction"

Distribution-Free Conformal Joint Prediction Regions for Neural Marked Temporal Point Processes

MBZUAI

A presentation will demonstrate how conformal prediction can be used to build well-calibrated, distribution-free neural Temporal Point Process (TPP) models from multiple event sequences. The method constructs a distribution-free joint prediction region for an event's arrival time and type with a finite-sample coverage guarantee. Because this bivariate response mixes a continuous variable (arrival time) with a discrete one (event type), a refined method derives highest-density regions from the joint predictive density of time and type. Why it matters: This research from a KAUST postdoc improves uncertainty quantification in neural TPPs, which are central to modeling continuous-time event sequences across many application areas, by providing more reliable prediction regions.
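The split-conformal recipe behind such finite-sample guarantees can be sketched for the simpler case of a single continuous response (the talk's joint time-and-type regions are a more involved construction; the function and variable names below are illustrative):

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction for a scalar response.

    cal_preds/cal_labels: predictions and true values on a held-out
    calibration set; test_pred: prediction for a new point.
    Returns an interval with >= 1 - alpha finite-sample coverage,
    assuming calibration and test points are exchangeable.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(cal_labels - cal_preds)
    n = len(scores)
    # Finite-sample-corrected quantile level.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(q_level, 1.0), method="higher")
    return test_pred - q, test_pred + q

rng = np.random.default_rng(0)
y = rng.normal(size=500)
preds = y + rng.normal(scale=0.1, size=500)  # a slightly noisy "model"
lo, hi = split_conformal_interval(preds[:400], y[:400], preds[400])
```

The key point is that the guarantee is distribution-free: nothing is assumed about the model or the data distribution beyond exchangeability.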

Learning with Noisy Labels

MBZUAI

This article discusses methods for handling label noise in deep learning, including extracting confident examples and modeling the label-noise process. Tongliang Liu from the University of Sydney presented these approaches, aiming to give participants a working understanding of learning with noisy labels. Why it matters: As AI models are increasingly trained on large, noisy datasets, techniques for robust learning become crucial for reliable real-world performance.
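One common heuristic for extracting confident examples is small-loss selection: once a model has fit the clean majority, mislabeled examples tend to have high loss. A minimal sketch (the function name and the assumption of a known noise rate are illustrative, not the speaker's exact method):

```python
import numpy as np

def select_confident(losses, noise_rate):
    """Small-loss selection: keep the (1 - noise_rate) fraction of training
    examples with the lowest current loss, treating them as likely clean."""
    k = int(len(losses) * (1 - noise_rate))
    return np.sort(np.argsort(losses)[:k])

# Mislabeled examples tend to incur high loss once the model has fit the
# easy, clean majority; low-loss examples are treated as trustworthy.
losses = np.array([0.1, 5.0, 0.2, 4.0, 0.3, 3.5])
kept = select_confident(losses, noise_rate=0.5)  # indices of likely-clean examples
```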

Uncertainty Estimation: Can your neural network provide confidence for its predictions?

MBZUAI

Dr. Maxim Panov from TII Abu Dhabi will give a talk on uncertainty estimation in neural networks, covering model calibration, ensemble methods, and Bayesian approaches. The talk will focus on efficient single-network methods for quantifying prediction confidence, without requiring ensembles or major training changes. Panov's background includes experience at Skolkovo Institute of Science and Technology and DATADVANCE Company. Why it matters: Improving uncertainty estimation is crucial for deploying reliable AI systems in critical applications across the GCC region.
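Model calibration, one of the talk's topics, is often illustrated with temperature scaling: a single scalar rescales a trained network's logits to minimise validation negative log-likelihood, adjusting confidence without changing accuracy. A minimal sketch with synthetic overconfident logits (the grid-search fit and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 5
base_logits = rng.normal(size=(n, k))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Sample labels from the "true" softmax probabilities, then pretend the
# network reports overconfident logits (3x too sharp).
probs = softmax(base_logits)
labels = np.array([rng.choice(k, p=p) for p in probs])
overconfident_logits = 3.0 * base_logits

def nll(logits, labels, T):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature that minimises validation NLL."""
    return min(grid, key=lambda T: nll(logits, labels, T))

T_hat = fit_temperature(overconfident_logits, labels)  # should be near 3
```

Because division by a positive scalar preserves the argmax of each row, recalibration leaves the predicted classes untouched.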

Efficiently Approximating Equivariance in Unconstrained Models

MBZUAI

Ahmed Elhag, a PhD student at the University of Oxford, presented a new training procedure that approximates equivariance in unconstrained machine learning models via a multitask objective. The approach adds an equivariance loss to unconstrained models, allowing them to learn approximate symmetries without the computational cost of fully equivariant architectures. Formulating equivariance as a flexible learning objective gives control over how strictly symmetry is enforced, and the method matches the performance of strictly equivariant baselines at lower cost. Why it matters: This research from a speaker at MBZUAI balances rigorous theory and practical scalability in geometric deep learning, potentially accelerating drug discovery and design.
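The multitask idea can be sketched as a soft penalty on violations of f(g(x)) = g(f(x)) over sampled group transforms, added to the ordinary task loss (a simplified sketch of the general idea, not the presented procedure; all names are illustrative):

```python
import numpy as np

def equivariance_penalty(f, xs, transforms):
    """Mean squared violation of f(g(x)) == g(f(x)) over sampled transforms.

    Used as: total_loss = task_loss + lam * equivariance_penalty(...),
    so the model is *encouraged* toward symmetry rather than constrained
    to it; lam controls how strictly symmetry is enforced.
    """
    err = 0.0
    for g in transforms:
        err += np.mean((f(g(xs)) - g(f(xs))) ** 2)
    return err / len(transforms)

# Toy check with the sign-flip group: an odd function is exactly
# equivariant to negation, while x**2 violates it.
xs = np.linspace(-1.0, 1.0, 11)
neg = lambda v: -v
pen_equivariant = equivariance_penalty(lambda v: v ** 3, xs, [neg])
pen_violating = equivariance_penalty(lambda v: v ** 2, xs, [neg])
```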

Performance Prediction via Bayesian Matrix Factorisation for Multilingual Natural Language Processing Tasks

MBZUAI

A new Bayesian matrix factorization approach is explored for performance prediction in multilingual NLP, aiming to reduce the experimental burden of evaluating many language combinations. The approach outperforms state-of-the-art methods on NLP benchmarks such as machine translation and cross-lingual entity linking, requires no hyperparameter tuning, and provides uncertainty estimates over its predictions. Why it matters: Accurate performance prediction methods accelerate multilingual NLP research by reducing computational costs and improving experimental efficiency, which is especially valuable for Arabic NLP tasks.
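The underlying idea is matrix completion: arrange scores as a model-by-language matrix, run only some experiments, and impute the rest from low-rank structure. A minimal non-Bayesian sketch using alternating least squares (the summary's Bayesian treatment would additionally place priors on the factors and report uncertainty over imputed entries; all names here are illustrative):

```python
import numpy as np

def als_complete(M, mask, rank=1, lam=0.01, iters=50, seed=0):
    """Complete a partially observed score matrix via low-rank
    alternating least squares; mask marks observed entries."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    for _ in range(iters):
        for i in range(n):  # update row (e.g. model) factors
            obs = mask[i]
            A = V[obs].T @ V[obs] + lam * np.eye(rank)
            U[i] = np.linalg.solve(A, V[obs].T @ M[i, obs])
        for j in range(m):  # update column (e.g. language) factors
            obs = mask[:, j]
            A = U[obs].T @ U[obs] + lam * np.eye(rank)
            V[j] = np.linalg.solve(A, U[obs].T @ M[obs, j])
    return U @ V.T

# Toy rank-1 score matrix with one held-out cell to impute.
scores = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, 2.0, 1.5])
mask = np.ones_like(scores, dtype=bool)
mask[2, 3] = False  # pretend this experiment was never run
pred = als_complete(scores, mask)[2, 3]  # true value is 4.5
```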