GCC AI Research

Results for "semi-supervised learning"

Self-Supervised Learning AI and AI for Molecular Biology

MBZUAI

Xiao Wang from Purdue University presented research on Adversarial Contrastive Learning (AdCo) and Cooperative-adversarial Contrastive Learning (CaCo) for improved self-supervised learning. He also discussed CryoREAD, a framework for building DNA/RNA structures from cryo-EM maps, and future work in deep learning for drug discovery. Wang's algorithms have impacted molecular biology, leading to new structure discoveries published in journals like Cell and Nature Microbiology. Why it matters: The research advances AI techniques for crucial tasks in molecular biology and drug discovery, with potential applications for institutions in the GCC region focused on healthcare and biotechnology.
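AdCo and CaCo build on contrastive objectives in which an encoder pulls two views of the same sample together and pushes other samples away. A minimal, pure-Python sketch of the underlying InfoNCE-style loss is below; this is an illustration of the general contrastive objective, not the authors' exact formulation (AdCo, for instance, learns the negatives adversarially rather than drawing them from a batch).

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(query, positive, negatives, temperature=0.1):
    """InfoNCE loss: cross-entropy of picking the positive view
    out of {positive} + negatives by similarity to the query."""
    logits = [cosine(query, positive) / temperature]
    logits += [cosine(query, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Loss is small when the positive is close and negatives are far,
# and large when a negative looks more similar than the positive.
loss_easy = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.0], [0.0, -1.0]])
loss_hard = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1], [1.0, 0.1]])
```

The temperature divides the similarities before the softmax; smaller values sharpen the distribution and penalize hard negatives more strongly.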

Learning with Noisy Labels

MBZUAI

This article discusses methods for handling label noise in deep learning, including extracting confident examples and modeling label noise. Tongliang Liu from the University of Sydney presented these approaches. The talk aimed to provide participants with a basic understanding of learning with noisy labels. Why it matters: As AI models are increasingly trained on large, noisy datasets, techniques for robust learning become crucial for reliable real-world performance.
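A widely used way to extract confident examples is the "small-loss trick": since deep networks tend to fit clean labels before noisy ones, the samples with the smallest training loss are treated as likely clean. The sketch below is a generic illustration of that idea under an assumed known noise rate, not Liu's specific algorithm.

```python
def select_confident(losses, noise_rate):
    """Small-loss trick: keep the fraction of samples with the
    lowest per-sample loss, assuming an estimated noise rate."""
    keep = int(len(losses) * (1.0 - noise_rate))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:keep])  # indices of presumed-clean samples

# Per-sample cross-entropy losses; high-loss samples are noise suspects.
losses = [0.05, 2.3, 0.10, 1.9, 0.08, 0.12]
clean_idx = select_confident(losses, noise_rate=0.3)
# keeps the four smallest-loss samples: indices 0, 2, 4, 5
```

In practice the noise rate is unknown and must be estimated, and the selection is typically redone every epoch as the loss distribution evolves.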

Upsampling Autoencoder for Self-Supervised Point Cloud Learning

arXiv

This paper introduces a self-supervised learning method for point cloud analysis using an upsampling autoencoder (UAE). The model uses subsampling and an encoder-decoder architecture to reconstruct the original point cloud, learning both semantic and geometric information. Experiments show the UAE outperforms existing methods in shape classification, part segmentation, and point cloud upsampling tasks.
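The training signal in such an autoencoder is a set-to-set reconstruction distance between the upsampled output and the original cloud. The sketch below illustrates the two ingredients the summary describes, random subsampling and a reconstruction loss, using the standard Chamfer distance; the paper's exact sampling scheme and loss may differ.

```python
import random

def subsample(points, ratio=0.25, seed=0):
    """Keep a random fraction of the cloud; the encoder sees only
    this subset and the decoder must reconstruct the full cloud."""
    rng = random.Random(seed)
    k = max(1, int(len(points) * ratio))
    return rng.sample(points, k)

def chamfer(a, b):
    """Chamfer distance: for each point, squared distance to its
    nearest neighbor in the other set, averaged both ways."""
    def sq(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q))
    ab = sum(min(sq(p, q) for q in b) for p in a) / len(a)
    ba = sum(min(sq(p, q) for q in a) for p in b) / len(b)
    return ab + ba

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
seen = subsample(cloud, ratio=0.5)
# A perfect reconstruction has zero Chamfer distance to the input,
# while the subsampled view alone does not cover the full cloud.
```

Because reconstructing dropped points requires reasoning about local surface geometry, minimizing this loss forces the encoder to capture both semantic and geometric structure without any labels.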

MBZUAI researchers at ICML

MBZUAI

MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper investigates semi-supervised learning vs. model-based methods for noisy data annotation in deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.

Towards open and scalable AI-powered waste detection

MBZUAI

MBZUAI researchers tackled the challenge of AI-powered waste detection in messy, real-world recycling facilities. They fine-tuned modern object detection models on real industrial waste imagery and combined this with a semi-supervised learning pipeline. Fine-tuning more than doubled detection performance, and the semi-supervised pipeline outperformed fully supervised baselines. Why it matters: This research offers a practical path for open research that can rival proprietary systems while reducing the need for costly manual labeling in waste management, a problem of global importance.
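The article does not detail the semi-supervised pipeline; a common instance of the idea is confidence-thresholded pseudo-labeling, sketched below as an assumption about how such a pipeline can work: the model labels unlabeled images itself and only its most confident predictions are added to the training set.

```python
def pseudo_label(probs, threshold=0.9):
    """Confidence-thresholded pseudo-labeling: keep an unlabeled
    sample only when the model's top class probability is high."""
    labeled = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            labeled.append((i, p.index(conf)))  # (sample index, class)
    return labeled

# Model softmax outputs over three hypothetical waste classes
# for a batch of unlabeled images.
probs = [
    [0.95, 0.03, 0.02],  # confident -> pseudo-labeled as class 0
    [0.40, 0.35, 0.25],  # uncertain -> discarded
    [0.05, 0.92, 0.03],  # confident -> pseudo-labeled as class 1
]
selected = pseudo_label(probs, threshold=0.9)
# selected == [(0, 0), (2, 1)]
```

The threshold trades label quantity against label quality: lowering it adds more pseudo-labeled data but risks reinforcing the model's own mistakes.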

Provable Unrestricted Adversarial Training without Compromise with Generalizability

arXiv

This paper introduces Provable Unrestricted Adversarial Training (PUAT), a novel adversarial training approach. PUAT enhances robustness against both unrestricted and restricted adversarial examples while improving standard generalizability by aligning the distributions of adversarial examples, natural data, and the classifier's learned distribution. The approach uses partially labeled data and an augmented triple-GAN to generate effective unrestricted adversarial examples, demonstrating superior performance on benchmarks.

On Transferability of Machine Learning Models

MBZUAI

This article discusses domain shift in machine learning, where testing data differs from training data, and methods to mitigate it via domain adaptation and domain generalization. Domain adaptation uses labeled source data and unlabeled target data. Domain generalization uses labeled data from one or more source domains to generalize to unseen target domains. Why it matters: Research in mitigating domain shift enhances the robustness and applicability of AI models in diverse real-world scenarios.
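One simple family of domain adaptation methods aligns feature statistics between domains. The toy sketch below matches the mean and standard deviation of 1-D target features to the source distribution, in the spirit of correlation-alignment methods like CORAL; it is an illustrative assumption, not the method from any specific paper in the article.

```python
import statistics

def align_to_source(target_feats, source_feats):
    """Toy 1-D distribution alignment: rescale target features to
    match the source mean and standard deviation."""
    mu_s, sd_s = statistics.mean(source_feats), statistics.pstdev(source_feats)
    mu_t, sd_t = statistics.mean(target_feats), statistics.pstdev(target_feats)
    return [(x - mu_t) / sd_t * sd_s + mu_s for x in target_feats]

source = [0.0, 1.0, 2.0, 3.0]      # labeled source-domain features
target = [10.0, 12.0, 14.0, 16.0]  # shifted unlabeled target features
aligned = align_to_source(target, source)
# After alignment the target features share the source statistics,
# so a classifier trained on the source can be applied to them.
```

Real methods do this in a learned, high-dimensional feature space (matching full covariance matrices, or training the encoder adversarially so a domain discriminator cannot tell the domains apart), but the principle of matching distributions is the same.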

Contrastive Pretraining for Echocardiography Segmentation with Limited Data

arXiv

This paper introduces a self-supervised contrastive learning method for segmenting the left ventricle in echocardiography images when labeled data is limited. The approach uses contrastive pretraining to improve the performance of UNet and DeepLabV3 segmentation networks. Experiments on the EchoNet-Dynamic dataset show the method achieves a Dice score of 0.9252, outperforming existing approaches, with code available on GitHub.