GCC AI Research


Results for "dependence measure"

New test that recovers hidden relationships in data to be presented at ICLR

MBZUAI

MBZUAI researchers developed a new conditional independence test (DCT) that determines whether two variables are dependent when both are discrete, when both are continuous, or when one is discrete and the other continuous. The test addresses cases where variables are inherently continuous but are recorded in discretized form due to data-collection limits. The findings will be presented at the 13th International Conference on Learning Representations (ICLR) in Singapore. Why it matters: This research addresses a fundamental problem in machine learning and statistics, improving the discovery of causal relationships in the mixed-type datasets common across finance, public health, and other fields.
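The summary does not describe the DCT construction itself, but the underlying task can be illustrated with a generic permutation test of dependence between a discretized variable and a continuous one. This is a minimal sketch, not the ICLR method; the statistic (variance of per-level means) and all names are illustrative assumptions:

```python
import random
import statistics

def permutation_dependence_test(x, y, num_perm=2000, seed=0):
    """Generic permutation test of dependence between a discrete
    variable x and a continuous variable y. (Illustrative only;
    NOT the DCT test from the article.)

    Statistic: variance of the per-level means of y, which grows
    when the level of x shifts the distribution of y."""
    rng = random.Random(seed)

    def stat(xs, ys):
        groups = {}
        for xi, yi in zip(xs, ys):
            groups.setdefault(xi, []).append(yi)
        means = [statistics.fmean(g) for g in groups.values()]
        return statistics.pvariance(means)

    observed = stat(x, y)
    y_perm = list(y)
    count = 0
    for _ in range(num_perm):
        rng.shuffle(y_perm)  # shuffling breaks any dependence on x
        if stat(x, y_perm) >= observed:
            count += 1
    return (count + 1) / (num_perm + 1)  # permutation p-value

# Dependent case: the continuous y shifts with the binary x.
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(200)]
y = [xi + rng.gauss(0, 1) for xi in x]
p_dep = permutation_dependence_test(x, y)

# Independent case: y drawn without reference to x.
y_ind = [rng.gauss(0, 1) for _ in range(200)]
p_ind = permutation_dependence_test(x, y_ind)
```

A small p-value rejects independence; the dependent pair above yields a far smaller p-value than the independent one.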

Confidence sets for Causal Discovery

MBZUAI

A new framework is presented for constructing confidence sets for causal orderings within structural equation models (SEMs). It leverages a residual bootstrap procedure to test the goodness-of-fit of candidate causal orderings, quantifying uncertainty in causal discovery. The method is computationally efficient, suitable for medium-sized problems, and maintains its theoretical guarantees as the number of variables increases. Why it matters: This offers a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, though the work itself has no stated connection to the Middle East.
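As a minimal sketch of the general residual-bootstrap idea — applied here to a single linear SEM equation to quantify uncertainty in one coefficient, not to whole causal orderings as in the paper — one can resample fitted residuals and refit. The model y = b·x + e, the sample sizes, and the function names are all illustrative assumptions:

```python
import random

def ols_slope(x, y):
    """Closed-form least-squares slope for a no-intercept model."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / sxx

def residual_bootstrap_slope(x, y, num_boot=1000, seed=0):
    """Residual bootstrap for the linear SEM equation y = b*x + e:
    fit once, resample the fitted residuals with replacement,
    rebuild y, and refit to obtain bootstrap slope estimates."""
    rng = random.Random(seed)
    b_hat = ols_slope(x, y)
    residuals = [yi - b_hat * xi for xi, yi in zip(x, y)]
    boot = []
    for _ in range(num_boot):
        e_star = [rng.choice(residuals) for _ in x]
        y_star = [b_hat * xi + ei for xi, ei in zip(x, e_star)]
        boot.append(ols_slope(x, y_star))
    return b_hat, boot

# Simulated data with true slope 2.0.
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(300)]
y = [2.0 * xi + rng.gauss(0, 0.5) for xi in x]
b_hat, boot = residual_bootstrap_slope(x, y)
boot.sort()
ci = (boot[25], boot[975])  # 95% percentile interval
```

The spread of the bootstrap estimates gives a confidence interval for the coefficient; the paper's framework extends this style of resampling to test entire causal orderings.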

Rare and revealing: A new method for uncovering hidden patterns in data

MBZUAI

MBZUAI researchers have developed a new kernel-based method to identify dependence patterns in data, especially 'rare dependence', where the relationship between variables holds only in small regions of the data and is diluted in the full sample. The method uses sample importance reweighting, assigning greater weight to regions exhibiting rare dependence. Tested on synthetic and real-world data, the algorithm successfully identified relationships between variables even under rare dependence, outperforming traditional methods such as HSIC (the Hilbert-Schmidt Independence Criterion). Why it matters: This advancement can improve data analysis in fields like public health, economics, genomics, and AI, enabling more accurate insights from complex observational data.
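The classical HSIC baseline the new method is compared against is compact enough to sketch. The code below implements plain (unweighted) empirical HSIC with RBF kernels; the paper's contribution, the importance reweighting, is only noted in a comment, and the bandwidth and data are illustrative assumptions:

```python
import math
import random

def rbf_gram(v, gamma=1.0):
    """RBF kernel Gram matrix for a 1-D sample."""
    return [[math.exp(-gamma * (a - b) ** 2) for b in v] for a in v]

def hsic(x, y, gamma=1.0):
    """Biased empirical HSIC estimate trace(K H L H) / n^2, where H
    centers the Gram matrices K and L. Larger values indicate
    dependence. The article's method additionally reweights samples
    to emphasize regions of rare dependence (not implemented here)."""
    n = len(x)
    K, L = rbf_gram(x, gamma), rbf_gram(y, gamma)

    def center(M):
        row = [sum(r) / n for r in M]     # row means (= column means: M symmetric)
        tot = sum(row) / n                # grand mean
        return [[M[i][j] - row[i] - row[j] + tot for j in range(n)]
                for i in range(n)]

    Kc, Lc = center(K), center(L)
    return sum(Kc[i][j] * Lc[j][i] for i in range(n) for j in range(n)) / n**2

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(100)]
y_dep = [xi ** 2 + 0.1 * rng.gauss(0, 1) for xi in x]  # nonlinear dependence
y_ind = [rng.gauss(0, 1) for _ in range(100)]          # independent of x
```

On this toy data the dependent pair scores a clearly larger HSIC than the independent pair, even though the dependence is nonlinear.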

Point correlations for graphics, vision and machine learning

MBZUAI

The article discusses the importance of sample correlations in computer graphics, vision, and machine learning, highlighting how tailored randomness can improve the efficiency of existing models. It covers the various correlations studied in computer graphics, the tools used to characterize them, and the use of neural networks to design new correlation patterns. Gurprit Singh from the Max Planck Institute for Informatics will present on the topic. Why it matters: Optimizing sampling techniques by understanding and applying correlations can lead to significant advancements and efficiency gains across multiple AI fields.

Information Design under Uncertainty

MBZUAI

Munther Dahleh from MIT gave a talk on information design under uncertainty, focusing on the challenges of creating an information marketplace. The talk addressed the externality faced by firms when information is allocated to competitors, and considered two models for this externality. The presentation included mechanisms for both models and highlighted the impact of competition on the revenue collected by the seller. Why it matters: The research advances understanding of information markets and mechanism design, relevant to the growing data economy in the GCC region.

Learning to act in noisy contexts using deep proxy learning

MBZUAI

Researchers are exploring methods for evaluating the outcomes of actions from off-policy observations in which the context is noisy or anonymized. They employ proxy causal learning, using two noisy views of the context to recover the average causal effect of an action without explicitly modeling the hidden context. The implementation uses learned neural-network representations for both action and context and outperforms an autoencoder-based alternative. Why it matters: This research addresses a key challenge in applying AI in real-world scenarios where data privacy or bandwidth limitations necessitate working with noisy or anonymized data.

Gaussian Variational Inference in high dimension

MBZUAI

This article discusses approximating a high-dimensional distribution with Gaussian variational inference, which selects the Gaussian that minimizes the Kullback-Leibler (KL) divergence to the target. Building on previous research, the study characterizes the mean and variance of the minimizing Gaussian and quantifies the approximation accuracy in terms of an effective dimension, which is relevant for analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
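For intuition, Gaussian variational inference can be sketched in one dimension: choose the Gaussian q = N(m, s²) minimizing KL(q ‖ p) by gradient descent on m and log s. The toy below uses a Gaussian target so the closed-form KL and the exact optimum are known; the target parameters, learning rate, and step count are illustrative assumptions, and real applications involve non-Gaussian, high-dimensional targets:

```python
import math

def kl_gauss(m, s, mu, sigma):
    """Closed-form KL( N(m, s^2) || N(mu, sigma^2) )."""
    return math.log(sigma / s) + (s**2 + (m - mu)**2) / (2 * sigma**2) - 0.5

def fit_gaussian_vi(mu, sigma, steps=500, lr=0.1):
    """Minimize the KL divergence over the variational mean m and
    log-scale t = log(s) by plain gradient descent. For a Gaussian
    target the optimum is the target itself, which makes the sketch
    easy to check."""
    m, t = 0.0, 0.0  # start from the standard normal N(0, 1)
    for _ in range(steps):
        s = math.exp(t)
        grad_m = (m - mu) / sigma**2        # d KL / d m
        grad_t = s**2 / sigma**2 - 1.0      # d KL / d t, via chain rule s = e^t
        m -= lr * grad_m
        t -= lr * grad_t
    return m, math.exp(t)

m, s = fit_gaussian_vi(mu=3.0, sigma=0.5)
```

The iterates converge to the target's mean and standard deviation, and the KL divergence at the fitted parameters is essentially zero; the article's contribution is to control how well this kind of approximation behaves as the dimension grows.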