GCC AI Research


Results for "AI diffusion"

VENOM: Text-driven Unrestricted Adversarial Example Generation with Diffusion Models

arXiv ·

The paper introduces VENOM, a text-driven framework for generating high-quality unrestricted adversarial examples using diffusion models. VENOM unifies image content generation and adversarial synthesis into a single reverse diffusion process, enhancing both attack success rate and image quality. The framework incorporates an adaptive adversarial guidance strategy with momentum to ensure the generated adversarial examples align with the distribution of natural images.
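The adaptive adversarial guidance with momentum described above can be sketched as a single reverse-diffusion update. Everything here is illustrative: the function name, the step scales, and the decomposition into a denoising direction plus a momentum-smoothed attack direction are assumptions, not VENOM's exact update rule.

```python
import numpy as np

def adversarial_guidance_step(x_t, denoise_grad, adv_grad, momentum,
                              beta=0.9, scale=0.1):
    """One toy reverse-diffusion update with momentum-smoothed adversarial
    guidance. `denoise_grad` stands in for the model's denoising direction and
    `adv_grad` for the attack-objective gradient (both are assumptions)."""
    # Smooth the attack direction with momentum so guidance stays stable
    # across steps instead of jittering with each new gradient.
    momentum = beta * momentum + (1.0 - beta) * adv_grad
    # Combine the denoising step with a small, scaled adversarial nudge so the
    # sample stays close to the natural-image distribution.
    x_next = x_t + denoise_grad + scale * momentum
    return x_next, momentum

# Toy usage: run one guided step on a flat 4-dim "image".
x = np.zeros(4)
m = np.zeros(4)
x, m = adversarial_guidance_step(x, np.full(4, 0.5), np.ones(4), m)
```

The momentum term is what makes the guidance "adaptive" in spirit: the effective attack direction is an exponential average of past gradients, damping noisy per-step gradients.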

The state of global AI diffusion in 2026 - The Official Microsoft Blog

The National ·

Microsoft's official blog outlines projections for the state of global AI diffusion by 2026, detailing expected trends in enterprise adoption and societal integration across industries. The analysis appears to cover the key factors shaping AI's spread, such as data availability, infrastructure development, and the evolving talent landscape, and to weigh AI's transformative potential against challenges in ethical governance and equitable access to advanced technologies. Why it matters: This analysis offers a global perspective on future AI adoption that can inform strategic planning for governments and businesses in the Middle East building out their AI ecosystems.

ScoreAdv: Score-based Targeted Generation of Natural Adversarial Examples via Diffusion Models

arXiv ·

The paper introduces ScoreAdv, a novel approach for generating natural adversarial examples using diffusion models. It incorporates an adversarial guidance mechanism and saliency maps to shift the sampling distribution and inject visual information. Experiments on ImageNet and CelebA datasets demonstrate state-of-the-art attack success rates, image quality, and robustness against defenses.
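A minimal sketch of the idea of shifting the sampling distribution with a saliency-masked adversarial term: the score is biased toward the attack target only where the saliency map is high. The function name, the additive combination, and the weighting constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def scoreadv_step(x_t, score, adv_grad, saliency, step=0.05, guidance=0.2):
    """Toy score-based sampling step with saliency-gated adversarial guidance.
    `score` stands in for the diffusion model's score estimate; `adv_grad` for
    the attack gradient; `saliency` in [0, 1] masks where to inject it
    (all names and scales are assumptions)."""
    # Bias the score toward the adversarial objective, but only in salient
    # regions, so non-salient areas keep their natural-image statistics.
    shifted_score = score + guidance * saliency * adv_grad
    # Move the sample along the shifted score direction.
    return x_t + step * shifted_score

# Toy usage: one step on a 3-dim sample with a half-strength saliency mask.
x_next = scoreadv_step(np.zeros(3), np.ones(3), np.ones(3), np.full(3, 0.5))
```

With saliency set to zero everywhere, the update reduces to a plain score step, which is the design point: the adversarial signal perturbs only the regions the saliency map flags as visually important.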

SemDiff: Generating Natural Unrestricted Adversarial Examples via Semantic Attributes Optimization in Diffusion Models

arXiv ·

This paper introduces SemDiff, a novel method for generating unrestricted adversarial examples (UAEs) by exploring the semantic latent space of diffusion models. SemDiff uses multi-attribute optimization to ensure attack success while preserving the naturalness and imperceptibility of generated UAEs. Experiments on high-resolution datasets demonstrate SemDiff's superior performance compared to state-of-the-art methods in attack success rate and imperceptibility, while also evading defenses.

Teaching AI to predict what cells will look like before running any experiments

MBZUAI ·

MBZUAI researchers have developed MorphDiff, a diffusion model that predicts cell morphology from gene expression data. MorphDiff uses the transcriptome to generate realistic post-perturbation images, either from scratch or by transforming a control image. The model combines a Morphology Variational Autoencoder (MVAE) with a Latent Diffusion Model, enabling both gene-to-image generation and image-to-image transformation. Why it matters: This could significantly accelerate drug discovery and biological research by allowing scientists to preview cellular changes before conducting experiments.
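The two-stage design (a morphology VAE plus a transcriptome-conditioned latent denoiser) can be sketched in a few lines. Both classes below are toy stand-ins under stated assumptions: linear tied-weight encode/decode for the MVAE, and a denoiser that simply relaxes a noisy latent toward a linear projection of the gene-expression profile.

```python
import numpy as np

rng = np.random.default_rng(0)

class MorphVAE:
    """Toy stand-in for the Morphology VAE: linear encode/decode of an image
    vector with tied weights (an assumption, not MorphDiff's architecture)."""
    def __init__(self, img_dim, latent_dim):
        self.enc = rng.normal(size=(latent_dim, img_dim)) / np.sqrt(img_dim)
        self.dec = self.enc.T  # tied weights keep the toy decoder consistent

    def encode(self, img):
        return self.enc @ img

    def decode(self, z):
        return self.dec @ z

def latent_denoise(z_noisy, gene_expression, cond_proj, steps=10):
    """Illustrative conditional 'denoiser': nudges a noisy latent toward a
    projection of the transcriptome (not the paper's diffusion network)."""
    target = cond_proj @ gene_expression
    z = z_noisy
    for _ in range(steps):
        z = z + 0.3 * (target - z)  # relax toward the conditioned target
    return z

# Gene-to-image: start from noise, denoise under the transcriptome, decode.
vae = MorphVAE(img_dim=6, latent_dim=2)
genes = np.array([1.0, 2.0])
z = latent_denoise(rng.normal(size=2), genes, np.eye(2))
img = vae.decode(z)
```

Image-to-image transformation would follow the same path but start from `vae.encode(control_image)` plus noise instead of pure noise, matching the two modes described above.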

Golden Noise and Zigzag Sampling of Diffusion Models

MBZUAI ·

Dr. Zeke Xie from HKUST(GZ) presented research on noise initialization and sampling strategies for diffusion models. The talk covered golden noise for text-to-image models, zigzag diffusion sampling, smooth initializations for video diffusion, and leveraging image diffusion for video synthesis. Xie leads the xLeaF Lab, focusing on optimization, inference, and generative AI, with previous experience at Baidu Research. Why it matters: The work addresses core challenges in improving the quality and diversity of generated content from diffusion models, a key area of advancement for AI applications in the region.
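The two sampling ideas from the talk can be sketched side by side. Both functions are rough illustrations: in the actual work golden noise is learned rather than searched, and the zigzag schedule and noise scales here are made-up assumptions.

```python
import numpy as np

def golden_noise(score_fn, shape, candidates=8, seed=0):
    """'Golden noise' sketch: draw several candidate initial noises and keep
    the one a scoring function rates highest (the real method learns good
    initial noise; this brute-force search is only illustrative)."""
    rng = np.random.default_rng(seed)
    noises = [rng.normal(size=shape) for _ in range(candidates)]
    return max(noises, key=score_fn)

def zigzag_sample(x, denoise, steps=5, noise_scale=0.3, seed=0):
    """Zigzag-style sampling sketch: alternate a denoising step with a smaller
    deliberate re-noising step, then denoise again, letting the sampler
    self-correct (schedule and scales are assumptions)."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x = denoise(x)                                  # forward (denoising) step
        x = x + noise_scale * rng.normal(size=x.shape)  # backward (re-noising) step
        x = denoise(x)                                  # denoise again
    return x

# Toy usage: pick a low-energy initial noise, then zigzag with a toy denoiser
# that contracts samples toward zero.
x0 = golden_noise(lambda n: -np.sum(n ** 2), (4,))
x = zigzag_sample(x0, lambda v: 0.5 * v)
```

The common thread is that both techniques spend extra compute at sampling time (candidate selection, repeated denoise/re-noise passes) to improve output quality without retraining the model.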

Image generation and manipulation research at VinAI

MBZUAI ·

VinAI Research presented projects focused on advancing image generation and manipulation with GANs and diffusion models. The GAN work aims to improve utility, coverage, and output consistency. The diffusion work focuses on accelerating sampling toward real-time performance and on mitigating the negative social impacts of diffusion-based personalized text-to-image generation. Why it matters: This talk indicates ongoing research and development in generative AI in Southeast Asia, an area of growing interest globally.