GCC AI Research


Results for "identifiability"

Confidence sets for Causal Discovery

MBZUAI ·

A new framework is presented for constructing confidence sets for causal orderings within structural equation models (SEMs). It uses a residual bootstrap procedure to test the goodness of fit of candidate causal orderings, thereby quantifying uncertainty in causal discovery. The method is computationally efficient and suitable for medium-sized problems, while maintaining theoretical guarantees as the number of variables increases. Why it matters: This adds a dimension of uncertainty quantification that strengthens the robustness and reliability of causal inference in complex systems, though the work has no stated connection to the Middle East.
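The core idea can be illustrated with a toy sketch (not the paper's actual procedure): fit a candidate ordering by least-squares regression, measure dependence between each parent and the resulting residuals with a simple squared-value correlation (a stand-in for a full independence test), and calibrate that statistic with a residual bootstrap. The two-variable SEM, uniform noise, and the specific statistic here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Toy linear SEM with uniform (non-Gaussian) noise: X1 -> X2.
x1 = rng.uniform(-1, 1, n)
x2 = 2.0 * x1 + rng.uniform(-1, 1, n)

def fit_and_stat(parent, child):
    """Regress child on parent; return intercept, slope, residuals, and a
    simple dependence statistic between parent and residuals (correlation
    of squares -- an illustrative stand-in for an independence test)."""
    b = np.cov(parent, child)[0, 1] / np.var(parent)
    a = child.mean() - b * parent.mean()
    resid = child - (a + b * parent)
    stat = abs(np.corrcoef(parent**2, resid**2)[0, 1])
    return a, b, resid, stat

def bootstrap_pvalue(parent, child, n_boot=200):
    a, b, resid, observed = fit_and_stat(parent, child)
    null = np.empty(n_boot)
    for i in range(n_boot):
        # Residual bootstrap: resample residuals and regenerate the child
        # under the fitted model, where the candidate ordering holds exactly.
        child_star = a + b * parent + rng.choice(resid, size=n, replace=True)
        null[i] = fit_and_stat(parent, child_star)[3]
    return observed, (null >= observed).mean()

stat_good, p_good = bootstrap_pvalue(x1, x2)  # correct ordering X1 -> X2
stat_bad, p_bad = bootstrap_pvalue(x2, x1)    # reversed ordering X2 -> X1
print(f"ordering 1->2: stat={stat_good:.3f}, p={p_good:.2f}")
print(f"ordering 2->1: stat={stat_bad:.3f}, p={p_bad:.2f}")
```

Orderings with small bootstrap p-values are rejected; the confidence set consists of the orderings that survive.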

Two weak assumptions, one strong result presented at ICLR

MBZUAI ·

MBZUAI researchers presented a new machine learning method at ICLR for uncovering hidden variables from observed data. The method, called "complementary gains," combines two weak assumptions to provide identifiability guarantees. The approach aims to recover the true latent variables reflecting real-world processes while remaining computationally efficient. Why it matters: The research advances disentangled representation learning by identifying minimal assumptions necessary for identifiability, improving the applicability of AI models to real-world data.

Gaussian Variational Inference in high dimension

MBZUAI ·

This article discusses approximating a high-dimensional distribution using Gaussian variational inference, i.e., minimizing the Kullback-Leibler divergence over Gaussian candidates. Building on previous research, it approximates the minimizer by a Gaussian distribution with a specific mean and variance, and characterizes the accuracy and scope of the approximation in terms of an efficient dimension, which is relevant for analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
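As a minimal one-dimensional sanity check of the underlying mechanism (not the article's high-dimensional, non-Gaussian setting): when the target is itself Gaussian, KL( N(m, s^2) || N(mu, sigma^2) ) has a closed form, and gradient descent on it recovers the target's mean and standard deviation exactly. The target parameters and step size below are arbitrary illustrative choices.

```python
import math

# Target distribution p = N(mu, sigma^2); variational family q = N(m, s^2).
mu, sigma = 2.0, 3.0

def kl(m, s):
    """Closed-form KL( N(m, s^2) || N(mu, sigma^2) )."""
    return math.log(sigma / s) + (s**2 + (m - mu)**2) / (2 * sigma**2) - 0.5

# Gradient descent on (m, s); gradients follow directly from the formula.
m, s, lr = 0.0, 1.0, 1.0
for _ in range(2000):
    grad_m = (m - mu) / sigma**2
    grad_s = s / sigma**2 - 1.0 / s
    m -= lr * grad_m
    s -= lr * grad_s

print(f"fitted mean={m:.4f}, std={s:.4f}, KL={kl(m, s):.2e}")
```

In the high-dimensional regime the article studies, the interest is precisely in how well such a Gaussian minimizer approximates a non-Gaussian target as dimension grows.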

On Transferability of Machine Learning Models

MBZUAI ·

This article discusses domain shift in machine learning, where the test data distribution differs from the training distribution, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data together with unlabeled target data; domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI ·

Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery rate analysis and causal inference tools for robustness and explainability. Consistency and identifiability were addressed theoretically, with applications shown in medical imaging analysis. Why it matters: The research contributes to addressing key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.

The complexities of identifying causality in the real world: A new study presented at ICML

MBZUAI ·

MBZUAI researchers presented a study at ICML 2024 examining how data aggregation distorts causal discovery. The study argues that current methods are misled because real-world interactions happen at a micro level while observations are aggregated. Using the example of ice cream sales and temperature, they highlight how aggregation introduces apparent "instantaneous causality" where the true relationships are time-lagged. Why it matters: The research identifies a fundamental limitation in current causal discovery methods, potentially impacting disciplines relying on accurate causal inference from observational data.
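The distortion is easy to reproduce in a toy simulation (illustrative only, not the paper's data or model): at the micro level X affects Y with a one-step lag and no instantaneous effect, yet after averaging over windows the lag falls inside each window and a strong contemporaneous correlation appears.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 100_000, 50  # micro-level time steps, aggregation window length

# Micro level: X causes Y with a one-step time lag, no instantaneous effect.
x = rng.normal(0.0, 1.0, T)
y = np.zeros(T)
y[1:] = 0.9 * x[:-1] + rng.normal(0.0, 0.5, T - 1)

# Contemporaneous correlation at the micro level is essentially zero ...
corr_micro = np.corrcoef(x, y)[0, 1]

# ... but averaging over windows of k steps absorbs the lag, and a strong
# "instantaneous" correlation emerges between the aggregated series.
x_agg = x.reshape(-1, k).mean(axis=1)
y_agg = y.reshape(-1, k).mean(axis=1)
corr_agg = np.corrcoef(x_agg, y_agg)[0, 1]

print(f"micro-level corr(x_t, y_t)    = {corr_micro:+.3f}")
print(f"aggregated corr(X_bar, Y_bar) = {corr_agg:+.3f}")
```

A causal discovery method run on the aggregated series would infer an instantaneous edge that does not exist at the micro level.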

Making the invisible visible in causality: a new algorithm to identify causal graphs involving both observed and latent variables

MBZUAI ·

Researchers from MBZUAI presented a new algorithm at ICLR 2024 that identifies causal relationships involving both observed and latent variables. The algorithm addresses limitations of existing methods that struggle with latent variables or assume observed variables don't directly influence latent variables. The proposed algorithm can accommodate both scenarios, offering a more generalizable approach to causal discovery. Why it matters: This research advances the development of AI systems that can analyze complex data and identify causal relationships, with potential applications in fields like medicine where understanding causality is crucial for developing treatments and preventative measures.

Problems in network archaeology: root finding and broadcasting

MBZUAI ·

This article discusses a talk by Gábor Lugosi on "network archaeology," specifically the problems of root finding and broadcasting in large networks. The talk addresses discovering the past of dynamically growing networks when only a present-day snapshot is observed. Lugosi's research interests include machine learning theory, nonparametric statistics, and random structures. Why it matters: Understanding the evolution and origins of networks is crucial for various applications, including analyzing social networks, biological systems, and the spread of information.
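One classical root-finding estimator from this literature ranks vertices by the size of the largest component left when the vertex is removed, and returns the K lowest-ranked vertices as a confidence set for the root. Below is a toy sketch on a uniform random attachment tree; the growth model, n, and K are illustrative assumptions, not necessarily the talk's exact setting.

```python
import random

random.seed(0)
n, K = 500, 10

# Grow a uniform random attachment tree: node t attaches to a uniformly
# chosen earlier node. Node 0 is the (hidden) root we try to recover.
parent = [None] + [random.randrange(t) for t in range(1, n)]
children = [[] for _ in range(n)]
for t in range(1, n):
    children[parent[t]].append(t)

# Subtree sizes via iterative post-order traversal from node 0.
size = [1] * n
order, stack = [], [0]
while stack:
    v = stack.pop()
    order.append(v)
    stack.extend(children[v])
for v in reversed(order):
    for c in children[v]:
        size[v] += size[c]

def max_component(v):
    """Size of the largest connected component left after removing v."""
    comps = [size[c] for c in children[v]]
    if v != 0:
        comps.append(n - size[v])  # the part of the tree "above" v
    return max(comps) if comps else 0

cent = [max_component(v) for v in range(n)]
confidence_set = sorted(range(n), key=lambda v: cent[v])[:K]
print("confidence set:", confidence_set)
print("root recovered:", 0 in confidence_set)
```

A striking result in this area is that, for natural growth models, a confidence set whose size depends only on the desired confidence level (not on n) suffices to contain the root with high probability.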