GCC AI Research

Results for "sampling"

Point correlations for graphics, vision and machine learning

MBZUAI ·

The article discusses the importance of sample correlations in computer graphics, vision, and machine learning, highlighting how tailored randomness can improve the efficiency of existing models. It covers the correlations studied in computer graphics and the tools used to characterize them, including neural-network-based approaches for designing new correlations. Gurprit Singh from the Max Planck Institute for Informatics will present on the topic. Why it matters: Understanding and applying sample correlations to optimize sampling techniques can yield significant efficiency gains across multiple AI fields.

Golden Noise and Zigzag Sampling of Diffusion Models

MBZUAI ·

Dr. Zeke Xie from HKUST(GZ) presented research on noise initialization and sampling strategies for diffusion models. The talk covered golden noise for text-to-image models, zigzag diffusion sampling, smooth initializations for video diffusion, and leveraging image diffusion for video synthesis. Xie leads the xLeaF Lab, focusing on optimization, inference, and generative AI, with previous experience at Baidu Research. Why it matters: The work addresses core challenges in improving the quality and diversity of generated content from diffusion models, a key area of advancement for AI applications in the region.

Gaussian Variational Inference in high dimension

MBZUAI ·

This article discusses approximating a high-dimensional distribution with Gaussian variational inference, which minimizes the Kullback-Leibler divergence over the family of Gaussians. Building on previous research, it approximates the minimizer by a Gaussian whose mean and variance are characterized explicitly, and it quantifies the approximation accuracy in terms of an efficient dimension, relevant for analyzing sampling schemes in optimization. Why it matters: This theoretical research can inform the development of more efficient and accurate AI algorithms, particularly in areas dealing with high-dimensional data such as machine learning and data analysis.
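The core computation can be sketched in one dimension: fit q = N(m, s²) to a target p by stochastic minimization of KL(q || p) with reparameterized Monte Carlo gradients. This is a minimal illustration of the general technique, not the paper's analysis; the toy target, step size, and sample count are arbitrary choices.

```python
import numpy as np

def fit_gaussian_vi(grad_log_p, steps=2000, lr=0.05, n_mc=256, seed=0):
    """Fit q = N(m, s^2) to a target p by minimizing KL(q || p),
    using reparameterized Monte Carlo gradients (x = m + s * eps)."""
    rng = np.random.default_rng(seed)
    m, log_s = 0.0, 0.0          # optimize log(s) so that s stays positive
    for _ in range(steps):
        eps = rng.standard_normal(n_mc)
        s = np.exp(log_s)
        x = m + s * eps
        # dKL/dm = -E[grad log p(x)]   (the entropy of q does not depend on m)
        g_m = -np.mean(grad_log_p(x))
        # dKL/dlog_s = -E[grad log p(x) * s * eps] - 1   (the -1 comes from -H(q))
        g_ls = -np.mean(grad_log_p(x) * s * eps) - 1.0
        m -= lr * g_m
        log_s -= lr * g_ls
    return m, np.exp(log_s)

# toy target: p = N(3, 2^2), so grad log p(x) = -(x - 3) / 4
m, s = fit_gaussian_vi(lambda x: -(x - 3.0) / 4.0)
```

Because the target here is itself Gaussian, the fitted (m, s) should recover its mean and standard deviation; for a non-Gaussian target the same loop returns the KL-closest Gaussian.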

Understanding ensemble learning

MBZUAI ·

An associate professor of Statistics at the University of Toronto gave a talk on how ensemble learning stabilizes and improves the generalization performance of an individual interpolator. The talk focused on bagged linear interpolators and introduced the multiplier-bootstrap-based bagged least squares estimator. The multiplier bootstrap encompasses the classical bootstrap with replacement as a special case, along with a Bernoulli bootstrap variant. Why it matters: Ensemble learning is a core technique for improving AI model performance, and the talk, hosted at MBZUAI, is of general interest to the AI research community.
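The estimator's structure can be sketched as follows: each replicate reweights the observations with random multipliers, fits weighted least squares, and the replicates are averaged. Bernoulli multipliers give the Bernoulli variant; multinomial multipliers recover the classical bootstrap with replacement. This is a generic sketch of multiplier-bootstrap bagging, not the talk's exact estimator, and the data, weight probability, and replicate count below are illustrative.

```python
import numpy as np

def bagged_least_squares(X, y, n_bags=200, scheme="bernoulli", p=0.5, seed=0):
    """Multiplier-bootstrap bagging: each replicate reweights the n
    observations with random multipliers w_i >= 0, fits weighted least
    squares, and the final estimator averages the replicate coefficients."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coefs = []
    for _ in range(n_bags):
        if scheme == "bernoulli":
            w = rng.binomial(1, p, size=n).astype(float)
        else:  # multinomial multipliers = classical bootstrap with replacement
            w = rng.multinomial(n, np.ones(n) / n).astype(float)
        sw = np.sqrt(w)[:, None]           # weighted LS via sqrt-weight scaling
        beta, *_ = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)
        coefs.append(beta)
    return np.mean(coefs, axis=0)

# toy check on a well-specified linear model with coefficients (1, -2)
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(300)
beta_bag = bagged_least_squares(X, y)
```

On this well-conditioned toy problem the bagged estimate is close to plain least squares; the stabilization benefit discussed in the talk shows up for interpolators in harder, overparameterized regimes.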

Monitoring the Kingdom’s consumable fishery products

KAUST ·

KAUST and KACST have partnered to assess the safety of seafood from the Red Sea and Arabian Gulf, with KACST funding an environmental contaminants lab at KAUST. Researchers from KAUST's Coastal & Marine Resources Core Lab (CMR) collect samples, which are then analyzed by the Analytical Chemistry Core Lab (ACL). The project aims to determine the exposure status of the Saudi population to environmental contaminants and provide recommendations on safe seafood consumption. Why it matters: Ensuring the safety of consumable fishery products is crucial for public health and food security in Saudi Arabia.

The role of data-driven models in quantifying uncertainty

KAUST ·

KAUST Professor Raul Tempone, an expert in Uncertainty Quantification (UQ), has been appointed as an Alexander von Humboldt Professor at RWTH Aachen University in Germany. This professorship will enable him to further his research on mathematics for uncertainty quantification with new collaborators. Tempone believes the KAUST Strategic Initiative for Uncertainty Quantification (SRI-UQ) contributed to this award. Why it matters: This appointment enhances KAUST's visibility and facilitates cross-fertilization between European and KAUST research groups, benefiting both institutions and attracting talent.

Distillation Policy Optimization

arXiv ·

The paper introduces a novel actor-critic framework called Distillation Policy Optimization that combines on-policy and off-policy data for reinforcement learning. It incorporates variance reduction mechanisms like a unified advantage estimator (UAE) and a residual baseline. The empirical results demonstrate improved sample efficiency for on-policy algorithms, bridging the gap with off-policy methods.
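The paper's unified advantage estimator and residual baseline are specific to the work and not reproduced here, but the general shape of a variance-reduced, baseline-subtracted advantage in an actor-critic can be illustrated with standard generalized advantage estimation (GAE); treat this as a generic sketch, not the paper's UAE.

```python
import numpy as np

def gae_advantages(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized advantage estimation: an exponentially weighted sum of
    TD residuals delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).
    `values` holds V(s_0) ... V(s_T), i.e. one more entry than `rewards`."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        nonterminal = 0.0 if dones[t] else 1.0
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        running = delta + gamma * lam * nonterminal * running
        adv[t] = running
    return adv

# two-step episode with a zero baseline, undiscounted, as a sanity check:
# advantages reduce to rewards-to-go, [2, 1]
adv = gae_advantages([1.0, 1.0], [0.0, 0.0, 0.0], [False, True],
                     gamma=1.0, lam=1.0)
```

Subtracting a learned value baseline in this way reduces gradient variance without biasing the policy gradient, which is the same design goal the paper's variance-reduction mechanisms pursue.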

Finding true protein hotspots in cancer research
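The kind of shrinkage such a Bayesian treatment provides can be illustrated with a toy Beta-Binomial calculation: a prior pulls the estimated mutation rate at a domain position toward the background when the sample count is small, reducing false positives. This is an illustrative sketch, not the KAUST team's actual model; the counts and background rate below are made up.

```python
import numpy as np

def hotspot_posterior(k, n, bg_rate, a0=1.0, b0=1.0, n_mc=100_000, seed=0):
    """Beta-Binomial model: k mutated tumor samples out of n at a position.
    With a Beta(a0, b0) prior, the posterior is Beta(a0 + k, b0 + n - k).
    Returns the posterior mean rate and a Monte Carlo estimate of the
    probability that the true rate exceeds the background rate."""
    rng = np.random.default_rng(seed)
    a, b = a0 + k, b0 + n - k
    draws = rng.beta(a, b, n_mc)
    return a / (a + b), float(np.mean(draws > bg_rate))

# hypothetical position: 9 mutated samples out of 40, 5% background rate
mean_rate, p_exceed = hotspot_posterior(k=9, n=40, bg_rate=0.05)
```

With few samples the posterior mean sits between the raw frequency k/n and the prior mean, and only positions whose exceedance probability is high would be flagged as candidate hotspots.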

KAUST ·

KAUST researchers developed a statistical approach to improve the identification of cancer-related protein mutations by reducing false positives. The method uses Bayesian statistics to analyze protein domain data from tumor samples, accounting for potential errors due to limited data. The team tested their method on prostate cancer data, successfully identifying a known cancer-linked mutation in the DNA binding protein cd00083. Why it matters: This enhances the reliability of cancer research at the molecular level, potentially accelerating the discovery of new therapeutic targets.